2025-06-22 19:04:39.170334 | Job console starting
2025-06-22 19:04:39.219424 | Updating git repos
2025-06-22 19:04:39.321551 | Cloning repos into workspace
2025-06-22 19:04:39.509099 | Restoring repo states
2025-06-22 19:04:39.534613 | Merging changes
2025-06-22 19:04:39.534634 | Checking out repos
2025-06-22 19:04:39.824808 | Preparing playbooks
2025-06-22 19:04:40.446403 | Running Ansible setup
2025-06-22 19:04:44.870281 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-22 19:04:45.711136 |
2025-06-22 19:04:45.711316 | PLAY [Base pre]
2025-06-22 19:04:45.736084 |
2025-06-22 19:04:45.736249 | TASK [Setup log path fact]
2025-06-22 19:04:45.767118 | orchestrator | ok
2025-06-22 19:04:45.786409 |
2025-06-22 19:04:45.786572 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-22 19:04:45.826932 | orchestrator | ok
2025-06-22 19:04:45.839105 |
2025-06-22 19:04:45.839246 | TASK [emit-job-header : Print job information]
2025-06-22 19:04:45.896503 | # Job Information
2025-06-22 19:04:45.896787 | Ansible Version: 2.16.14
2025-06-22 19:04:45.896848 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-22 19:04:45.896908 | Pipeline: post
2025-06-22 19:04:45.896950 | Executor: 521e9411259a
2025-06-22 19:04:45.896988 | Triggered by: https://github.com/osism/testbed/commit/206b120f5efd9d8b6e0d281a8cd1d66810029b10
2025-06-22 19:04:45.897028 | Event ID: b9f3f6ba-4f9b-11f0-9af7-f95284c403f5
2025-06-22 19:04:45.907705 |
2025-06-22 19:04:45.907884 | LOOP [emit-job-header : Print node information]
2025-06-22 19:04:46.022981 | orchestrator | ok:
2025-06-22 19:04:46.023191 | orchestrator | # Node Information
2025-06-22 19:04:46.023226 | orchestrator | Inventory Hostname: orchestrator
2025-06-22 19:04:46.023253 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-22 19:04:46.023276 | orchestrator | Username: zuul-testbed02
2025-06-22 19:04:46.023296 | orchestrator | Distro: Debian 12.11
2025-06-22 19:04:46.023322 | orchestrator | Provider: static-testbed
2025-06-22 19:04:46.023343 | orchestrator | Region:
2025-06-22 19:04:46.023381 | orchestrator | Label: testbed-orchestrator
2025-06-22 19:04:46.023403 | orchestrator | Product Name: OpenStack Nova
2025-06-22 19:04:46.023422 | orchestrator | Interface IP: 81.163.193.140
2025-06-22 19:04:46.037570 |
2025-06-22 19:04:46.037731 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-22 19:04:46.519018 | orchestrator -> localhost | changed
2025-06-22 19:04:46.527717 |
2025-06-22 19:04:46.527872 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-22 19:04:47.631315 | orchestrator -> localhost | changed
2025-06-22 19:04:47.656857 |
2025-06-22 19:04:47.657030 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-22 19:04:47.958008 | orchestrator -> localhost | ok
2025-06-22 19:04:47.965600 |
2025-06-22 19:04:47.965742 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-22 19:04:48.002996 | orchestrator | ok
2025-06-22 19:04:48.024056 | orchestrator | included: /var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-22 19:04:48.032657 |
2025-06-22 19:04:48.032815 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-22 19:04:48.890759 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-22 19:04:48.891182 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/a65c4017b01b42e2a4146bccaa6b7607_id_rsa
2025-06-22 19:04:48.891257 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/a65c4017b01b42e2a4146bccaa6b7607_id_rsa.pub
2025-06-22 19:04:48.891306 | orchestrator -> localhost | The key fingerprint is:
2025-06-22 19:04:48.891351 | orchestrator -> localhost | SHA256:tvz9gmdyfzbCyJ1JyRnTcT0FVZYP2vbZeuVHJbAaSiU zuul-build-sshkey
2025-06-22 19:04:48.891417 | orchestrator -> localhost | The key's randomart image is:
2025-06-22 19:04:48.891478 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-22 19:04:48.891518 | orchestrator -> localhost | | .oX|
2025-06-22 19:04:48.891553 | orchestrator -> localhost | | E . . .=o|
2025-06-22 19:04:48.891577 | orchestrator -> localhost | | o =..=|
2025-06-22 19:04:48.891600 | orchestrator -> localhost | | . . oo+.o|
2025-06-22 19:04:48.891646 | orchestrator -> localhost | | .S. o..=o+|
2025-06-22 19:04:48.891680 | orchestrator -> localhost | | o... = .=|
2025-06-22 19:04:48.891706 | orchestrator -> localhost | | o ..= o+.|
2025-06-22 19:04:48.891731 | orchestrator -> localhost | | .o+=*..*|
2025-06-22 19:04:48.891756 | orchestrator -> localhost | | .=.+++o|
2025-06-22 19:04:48.891780 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-22 19:04:48.891847 | orchestrator -> localhost | ok: Runtime: 0:00:00.314612
2025-06-22 19:04:48.900067 |
2025-06-22 19:04:48.900192 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-22 19:04:48.920949 | orchestrator | ok
2025-06-22 19:04:48.932459 | orchestrator | included: /var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-22 19:04:48.956403 |
2025-06-22 19:04:48.956590 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-22 19:04:48.981827 | orchestrator | skipping: Conditional result was False
2025-06-22 19:04:48.996005 |
2025-06-22 19:04:48.996163 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-22 19:04:49.594875 | orchestrator | changed
2025-06-22 19:04:49.604696 |
2025-06-22 19:04:49.604834 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-22 19:04:49.896773 | orchestrator | ok
2025-06-22 19:04:49.905636 |
2025-06-22 19:04:49.905785 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-22 19:04:50.331828 | orchestrator | ok
2025-06-22 19:04:50.340040 |
2025-06-22 19:04:50.340158 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-22 19:04:50.766879 | orchestrator | ok
2025-06-22 19:04:50.773514 |
2025-06-22 19:04:50.773602 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-22 19:04:50.796708 | orchestrator | skipping: Conditional result was False
2025-06-22 19:04:50.808442 |
2025-06-22 19:04:50.808573 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-22 19:04:51.219698 | orchestrator -> localhost | changed
2025-06-22 19:04:51.234969 |
2025-06-22 19:04:51.235090 | TASK [add-build-sshkey : Add back temp key]
2025-06-22 19:04:51.551582 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/a65c4017b01b42e2a4146bccaa6b7607_id_rsa (zuul-build-sshkey)
2025-06-22 19:04:51.551790 | orchestrator -> localhost | ok: Runtime: 0:00:00.019304
2025-06-22 19:04:51.559640 |
2025-06-22 19:04:51.559746 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-22 19:04:51.994869 | orchestrator | ok
2025-06-22 19:04:52.000942 |
2025-06-22 19:04:52.001094 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-22 19:04:52.024726 | orchestrator | skipping: Conditional result was False
2025-06-22 19:04:52.064229 |
2025-06-22 19:04:52.064337 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-22 19:04:52.463736 | orchestrator | ok
2025-06-22 19:04:52.477922 |
2025-06-22 19:04:52.478022 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-22 19:04:52.521932 | orchestrator | ok
2025-06-22 19:04:52.531109 |
2025-06-22 19:04:52.531209 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-22 19:04:52.818168 | orchestrator -> localhost | ok
2025-06-22 19:04:52.825132 |
2025-06-22 19:04:52.825232 | TASK [validate-host : Collect information about the host]
2025-06-22 19:04:53.972294 | orchestrator | ok
2025-06-22 19:04:53.993495 |
2025-06-22 19:04:53.993719 | TASK [validate-host : Sanitize hostname]
2025-06-22 19:04:54.061284 | orchestrator | ok
2025-06-22 19:04:54.069914 |
2025-06-22 19:04:54.070034 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-22 19:04:54.560115 | orchestrator -> localhost | changed
2025-06-22 19:04:54.566177 |
2025-06-22 19:04:54.566263 | TASK [validate-host : Collect information about zuul worker]
2025-06-22 19:04:54.986652 | orchestrator | ok
2025-06-22 19:04:54.995739 |
2025-06-22 19:04:54.995867 | TASK [validate-host : Write out all zuul information for each host]
2025-06-22 19:04:55.485004 | orchestrator -> localhost | changed
2025-06-22 19:04:55.504645 |
2025-06-22 19:04:55.504765 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-22 19:04:55.802148 | orchestrator | ok
2025-06-22 19:04:55.810779 |
2025-06-22 19:04:55.810915 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-22 19:05:31.538254 | orchestrator | changed:
2025-06-22 19:05:31.538504 | orchestrator | .d..t...... src/
2025-06-22 19:05:31.538543 | orchestrator | .d..t...... src/github.com/
2025-06-22 19:05:31.538568 | orchestrator | .d..t...... src/github.com/osism/
2025-06-22 19:05:31.538590 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-22 19:05:31.538612 | orchestrator | RedHat.yml
2025-06-22 19:05:31.549880 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-22 19:05:31.549897 | orchestrator | RedHat.yml
2025-06-22 19:05:31.549949 | orchestrator | = 2.2.0"...
2025-06-22 19:05:48.296361 | orchestrator | 19:05:48.296 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-22 19:05:48.367049 | orchestrator | 19:05:48.366 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-22 19:05:49.428984 | orchestrator | 19:05:49.428 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-22 19:05:50.296497 | orchestrator | 19:05:50.296 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:05:51.452647 | orchestrator | 19:05:51.452 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-22 19:05:52.308083 | orchestrator | 19:05:52.307 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:05:53.467381 | orchestrator | 19:05:53.467 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-22 19:05:54.639735 | orchestrator | 19:05:54.637 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-22 19:05:54.640300 | orchestrator | 19:05:54.637 STDOUT terraform: Providers are signed by their developers.
2025-06-22 19:05:54.640315 | orchestrator | 19:05:54.637 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-22 19:05:54.640321 | orchestrator | 19:05:54.637 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-22 19:05:54.640327 | orchestrator | 19:05:54.638 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-22 19:05:54.640334 | orchestrator | 19:05:54.638 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-22 19:05:54.640341 | orchestrator | 19:05:54.638 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-22 19:05:54.640345 | orchestrator | 19:05:54.638 STDOUT terraform: you run "tofu init" in the future.
2025-06-22 19:05:54.640350 | orchestrator | 19:05:54.638 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-22 19:05:54.640354 | orchestrator | 19:05:54.638 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-22 19:05:54.640358 | orchestrator | 19:05:54.638 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-22 19:05:54.640362 | orchestrator | 19:05:54.638 STDOUT terraform: should now work.
2025-06-22 19:05:54.640366 | orchestrator | 19:05:54.638 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-22 19:05:54.640369 | orchestrator | 19:05:54.638 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-22 19:05:54.640374 | orchestrator | 19:05:54.639 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-22 19:05:54.748708 | orchestrator | 19:05:54.747 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-22 19:05:54.748775 | orchestrator | 19:05:54.747 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-22 19:05:54.946464 | orchestrator | 19:05:54.946 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-22 19:05:54.946571 | orchestrator | 19:05:54.946 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-22 19:05:54.946581 | orchestrator | 19:05:54.946 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-22 19:05:54.946588 | orchestrator | 19:05:54.946 STDOUT terraform: for this configuration.
2025-06-22 19:05:55.140769 | orchestrator | 19:05:55.139 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
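The init output above implies a provider requirements block roughly like the following. This is a minimal sketch, not copied from the osism/testbed Terraform code; the constraint for hashicorp/local is assumed, because only the fragment `= 2.2.0"...` of that line survives in this excerpt, while the other two constraints are taken directly from the "Finding ..." lines.

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"                           # assumed; the constraint line is cut off above
    }
    null = {
      source = "hashicorp/null"                      # "Finding latest version of hashicorp/null..." => no version constraint
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"                          # matches the constraint shown in the init output
    }
  }
}

The generated .terraform.lock.hcl then pins the versions actually installed in this run: hashicorp/local v2.5.3, hashicorp/null v3.2.4 and terraform-provider-openstack/openstack v3.2.0.

The plan that follows creates block storage volumes and compute instances for the testbed manager and nodes. A condensed sketch of the resource shapes it describes is given below; the literal values (names, sizes, flavors, key pair) come from the plan output, while the counts, name expression, variable and the volume/network wiring are assumptions for illustration.

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6                              # node_base_volume[0]..[5] in the plan below
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9                              # node_volume[0]..[8] in the plan below
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"   # reproduces the planned names; exact expression assumed
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
}

variable "network_id" {
  type        = string
  description = "Hypothetical input; the excerpt does not show how the server network is selected."
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                              # assumed to match the base volumes; the excerpt shows at least node_server[0..2]
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true

  block_device {
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id   # assumed wiring to the node base volumes
  }

  network {
    uuid = var.network_id                            # hypothetical; see variable above
  }
}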
2025-06-22 19:05:55.140868 | orchestrator | 19:05:55.139 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-06-22 19:05:55.243401 | orchestrator | 19:05:55.243 STDOUT terraform: ci.auto.tfvars 2025-06-22 19:05:55.255996 | orchestrator | 19:05:55.255 STDOUT terraform: default_custom.tf 2025-06-22 19:05:55.392634 | orchestrator | 19:05:55.391 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 2025-06-22 19:05:56.306517 | orchestrator | 19:05:56.306 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-06-22 19:05:56.834701 | orchestrator | 19:05:56.834 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-06-22 19:05:57.095753 | orchestrator | 19:05:57.087 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-06-22 19:05:57.095802 | orchestrator | 19:05:57.087 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-06-22 19:05:57.095808 | orchestrator | 19:05:57.087 STDOUT terraform:  + create 2025-06-22 19:05:57.095814 | orchestrator | 19:05:57.087 STDOUT terraform:  <= read (data resources) 2025-06-22 19:05:57.095820 | orchestrator | 19:05:57.087 STDOUT terraform: OpenTofu will perform the following actions: 2025-06-22 19:05:57.095824 | orchestrator | 19:05:57.087 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-06-22 19:05:57.095828 | orchestrator | 19:05:57.087 STDOUT terraform:  # (config refers to values not yet known) 2025-06-22 19:05:57.095832 | orchestrator | 19:05:57.087 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-06-22 19:05:57.095836 | orchestrator | 19:05:57.087 STDOUT terraform:  + checksum = (known after apply) 2025-06-22 19:05:57.095840 | orchestrator | 19:05:57.087 STDOUT terraform:  + created_at = (known after apply) 2025-06-22 19:05:57.095844 | orchestrator | 19:05:57.087 STDOUT terraform:  + file = (known after apply) 2025-06-22 19:05:57.095847 | orchestrator | 19:05:57.087 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.095851 | orchestrator | 19:05:57.087 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.095869 | orchestrator | 19:05:57.087 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-22 19:05:57.095873 | orchestrator | 19:05:57.087 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-22 19:05:57.095877 | orchestrator | 19:05:57.087 STDOUT terraform:  + most_recent = true 2025-06-22 19:05:57.095880 | orchestrator | 19:05:57.087 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.095884 | orchestrator | 19:05:57.087 STDOUT terraform:  + protected = (known after apply) 2025-06-22 19:05:57.095888 | orchestrator | 19:05:57.087 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.095892 | orchestrator | 19:05:57.088 STDOUT terraform:  + schema = (known after apply) 2025-06-22 19:05:57.095895 | orchestrator | 19:05:57.088 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-22 19:05:57.095899 | orchestrator | 19:05:57.088 STDOUT terraform:  + tags = (known after apply) 2025-06-22 19:05:57.095903 | orchestrator | 19:05:57.088 STDOUT terraform:  + updated_at = (known after apply) 2025-06-22 19:05:57.095907 | orchestrator | 
19:05:57.088 STDOUT terraform:  } 2025-06-22 19:05:57.095913 | orchestrator | 19:05:57.088 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-06-22 19:05:57.095917 | orchestrator | 19:05:57.088 STDOUT terraform:  # (config refers to values not yet known) 2025-06-22 19:05:57.095944 | orchestrator | 19:05:57.088 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-06-22 19:05:57.095948 | orchestrator | 19:05:57.088 STDOUT terraform:  + checksum = (known after apply) 2025-06-22 19:05:57.095952 | orchestrator | 19:05:57.088 STDOUT terraform:  + created_at = (known after apply) 2025-06-22 19:05:57.095955 | orchestrator | 19:05:57.088 STDOUT terraform:  + file = (known after apply) 2025-06-22 19:05:57.095959 | orchestrator | 19:05:57.088 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.095963 | orchestrator | 19:05:57.088 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.095967 | orchestrator | 19:05:57.088 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-22 19:05:57.095970 | orchestrator | 19:05:57.088 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-22 19:05:57.095980 | orchestrator | 19:05:57.088 STDOUT terraform:  + most_recent = true 2025-06-22 19:05:57.095984 | orchestrator | 19:05:57.088 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.095988 | orchestrator | 19:05:57.088 STDOUT terraform:  + protected = (known after apply) 2025-06-22 19:05:57.095992 | orchestrator | 19:05:57.088 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096007 | orchestrator | 19:05:57.088 STDOUT terraform:  + schema = (known after apply) 2025-06-22 19:05:57.096011 | orchestrator | 19:05:57.088 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-22 19:05:57.096015 | orchestrator | 19:05:57.088 STDOUT terraform:  + tags = (known after apply) 2025-06-22 19:05:57.096019 | orchestrator | 19:05:57.088 STDOUT terraform:  + updated_at = (known after apply) 2025-06-22 19:05:57.096022 | orchestrator | 19:05:57.088 STDOUT terraform:  } 2025-06-22 19:05:57.096026 | orchestrator | 19:05:57.088 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-06-22 19:05:57.096034 | orchestrator | 19:05:57.088 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-06-22 19:05:57.096038 | orchestrator | 19:05:57.088 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:05:57.096042 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:05:57.096046 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:05:57.096050 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:05:57.096053 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:05:57.096057 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:05:57.096061 | orchestrator | 19:05:57.088 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:05:57.096065 | orchestrator | 19:05:57.088 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:05:57.096069 | orchestrator | 19:05:57.088 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:05:57.096073 | orchestrator | 19:05:57.088 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-06-22 19:05:57.096076 | orchestrator | 19:05:57.088 STDOUT 
terraform:  + id = (known after apply) 2025-06-22 19:05:57.096080 | orchestrator | 19:05:57.088 STDOUT terraform:  } 2025-06-22 19:05:57.096084 | orchestrator | 19:05:57.088 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-06-22 19:05:57.096087 | orchestrator | 19:05:57.088 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-06-22 19:05:57.096091 | orchestrator | 19:05:57.088 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:05:57.096095 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:05:57.096099 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:05:57.096103 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:05:57.096106 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:05:57.096110 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:05:57.096114 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:05:57.096117 | orchestrator | 19:05:57.089 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:05:57.096121 | orchestrator | 19:05:57.089 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:05:57.096125 | orchestrator | 19:05:57.089 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-06-22 19:05:57.096128 | orchestrator | 19:05:57.089 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096132 | orchestrator | 19:05:57.089 STDOUT terraform:  } 2025-06-22 19:05:57.096143 | orchestrator | 19:05:57.089 STDOUT terraform:  # local_file.inventory will be created 2025-06-22 19:05:57.096147 | orchestrator | 19:05:57.089 STDOUT terraform:  + resource "local_file" "inventory" { 2025-06-22 19:05:57.096151 | orchestrator | 19:05:57.089 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:05:57.096161 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:05:57.096165 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:05:57.096171 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:05:57.096175 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:05:57.096179 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:05:57.096182 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:05:57.096186 | orchestrator | 19:05:57.089 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:05:57.096190 | orchestrator | 19:05:57.089 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:05:57.096194 | orchestrator | 19:05:57.089 STDOUT terraform:  + filename = "inventory.ci" 2025-06-22 19:05:57.096197 | orchestrator | 19:05:57.089 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096201 | orchestrator | 19:05:57.089 STDOUT terraform:  } 2025-06-22 19:05:57.096205 | orchestrator | 19:05:57.089 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-06-22 19:05:57.096209 | orchestrator | 19:05:57.089 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-06-22 19:05:57.096213 | orchestrator | 19:05:57.089 STDOUT terraform:  + content = (sensitive value) 2025-06-22 
19:05:57.096217 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:05:57.096220 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:05:57.096224 | orchestrator | 19:05:57.089 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:05:57.096228 | orchestrator | 19:05:57.090 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:05:57.096232 | orchestrator | 19:05:57.090 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:05:57.096235 | orchestrator | 19:05:57.090 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:05:57.096239 | orchestrator | 19:05:57.090 STDOUT terraform:  + directory_permission = "0700" 2025-06-22 19:05:57.096243 | orchestrator | 19:05:57.090 STDOUT terraform:  + file_permission = "0600" 2025-06-22 19:05:57.096247 | orchestrator | 19:05:57.090 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-06-22 19:05:57.096250 | orchestrator | 19:05:57.090 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096254 | orchestrator | 19:05:57.090 STDOUT terraform:  } 2025-06-22 19:05:57.096258 | orchestrator | 19:05:57.090 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-06-22 19:05:57.096262 | orchestrator | 19:05:57.090 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-06-22 19:05:57.096265 | orchestrator | 19:05:57.090 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096269 | orchestrator | 19:05:57.090 STDOUT terraform:  } 2025-06-22 19:05:57.096273 | orchestrator | 19:05:57.090 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-06-22 19:05:57.096283 | orchestrator | 19:05:57.090 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-06-22 19:05:57.096287 | orchestrator | 19:05:57.090 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096291 | orchestrator | 19:05:57.090 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096294 | orchestrator | 19:05:57.090 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096298 | orchestrator | 19:05:57.090 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096302 | orchestrator | 19:05:57.090 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096306 | orchestrator | 19:05:57.090 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-06-22 19:05:57.096310 | orchestrator | 19:05:57.090 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096313 | orchestrator | 19:05:57.090 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096320 | orchestrator | 19:05:57.090 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096324 | orchestrator | 19:05:57.090 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096328 | orchestrator | 19:05:57.090 STDOUT terraform:  } 2025-06-22 19:05:57.096332 | orchestrator | 19:05:57.090 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-06-22 19:05:57.096335 | orchestrator | 19:05:57.090 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096339 | orchestrator | 19:05:57.090 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096343 | orchestrator | 19:05:57.091 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 
19:05:57.096347 | orchestrator | 19:05:57.091 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096350 | orchestrator | 19:05:57.091 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096354 | orchestrator | 19:05:57.091 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096358 | orchestrator | 19:05:57.091 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-06-22 19:05:57.096362 | orchestrator | 19:05:57.091 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096365 | orchestrator | 19:05:57.091 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096369 | orchestrator | 19:05:57.091 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096373 | orchestrator | 19:05:57.091 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096376 | orchestrator | 19:05:57.091 STDOUT terraform:  } 2025-06-22 19:05:57.096380 | orchestrator | 19:05:57.091 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-06-22 19:05:57.096384 | orchestrator | 19:05:57.091 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096388 | orchestrator | 19:05:57.091 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096395 | orchestrator | 19:05:57.091 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096399 | orchestrator | 19:05:57.091 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096403 | orchestrator | 19:05:57.091 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096406 | orchestrator | 19:05:57.091 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096410 | orchestrator | 19:05:57.091 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-06-22 19:05:57.096414 | orchestrator | 19:05:57.091 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096418 | orchestrator | 19:05:57.091 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096422 | orchestrator | 19:05:57.091 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096426 | orchestrator | 19:05:57.091 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096430 | orchestrator | 19:05:57.091 STDOUT terraform:  } 2025-06-22 19:05:57.096433 | orchestrator | 19:05:57.091 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-06-22 19:05:57.096437 | orchestrator | 19:05:57.091 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096441 | orchestrator | 19:05:57.091 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096447 | orchestrator | 19:05:57.091 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096451 | orchestrator | 19:05:57.091 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096455 | orchestrator | 19:05:57.091 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096458 | orchestrator | 19:05:57.091 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096462 | orchestrator | 19:05:57.091 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-06-22 19:05:57.096468 | orchestrator | 19:05:57.091 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096472 | orchestrator | 19:05:57.092 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096476 | orchestrator | 19:05:57.092 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-06-22 19:05:57.096480 | orchestrator | 19:05:57.092 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096483 | orchestrator | 19:05:57.092 STDOUT terraform:  } 2025-06-22 19:05:57.096487 | orchestrator | 19:05:57.092 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-06-22 19:05:57.096491 | orchestrator | 19:05:57.092 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096495 | orchestrator | 19:05:57.092 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096498 | orchestrator | 19:05:57.092 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096502 | orchestrator | 19:05:57.092 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096506 | orchestrator | 19:05:57.092 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096509 | orchestrator | 19:05:57.092 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096517 | orchestrator | 19:05:57.092 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-06-22 19:05:57.096520 | orchestrator | 19:05:57.092 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096524 | orchestrator | 19:05:57.092 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096528 | orchestrator | 19:05:57.092 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096532 | orchestrator | 19:05:57.092 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096535 | orchestrator | 19:05:57.092 STDOUT terraform:  } 2025-06-22 19:05:57.096539 | orchestrator | 19:05:57.092 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-06-22 19:05:57.096545 | orchestrator | 19:05:57.092 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096549 | orchestrator | 19:05:57.092 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096553 | orchestrator | 19:05:57.092 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096557 | orchestrator | 19:05:57.092 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096560 | orchestrator | 19:05:57.092 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096564 | orchestrator | 19:05:57.092 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096568 | orchestrator | 19:05:57.092 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-06-22 19:05:57.096571 | orchestrator | 19:05:57.092 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096575 | orchestrator | 19:05:57.092 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096579 | orchestrator | 19:05:57.092 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096583 | orchestrator | 19:05:57.092 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096586 | orchestrator | 19:05:57.092 STDOUT terraform:  } 2025-06-22 19:05:57.096590 | orchestrator | 19:05:57.092 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-06-22 19:05:57.096594 | orchestrator | 19:05:57.092 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:05:57.096598 | orchestrator | 19:05:57.092 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096601 | orchestrator | 19:05:57.092 STDOUT terraform:  + availability_zone = "nova" 
2025-06-22 19:05:57.096605 | orchestrator | 19:05:57.092 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096609 | orchestrator | 19:05:57.092 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.096615 | orchestrator | 19:05:57.093 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096619 | orchestrator | 19:05:57.093 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-06-22 19:05:57.096623 | orchestrator | 19:05:57.093 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096627 | orchestrator | 19:05:57.093 STDOUT terraform:  + size = 80 2025-06-22 19:05:57.096634 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096638 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096641 | orchestrator | 19:05:57.093 STDOUT terraform:  } 2025-06-22 19:05:57.096645 | orchestrator | 19:05:57.093 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-06-22 19:05:57.096649 | orchestrator | 19:05:57.093 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.096657 | orchestrator | 19:05:57.093 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096663 | orchestrator | 19:05:57.093 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096668 | orchestrator | 19:05:57.093 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096674 | orchestrator | 19:05:57.093 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096681 | orchestrator | 19:05:57.093 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-22 19:05:57.096685 | orchestrator | 19:05:57.093 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096690 | orchestrator | 19:05:57.093 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.096696 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096701 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096705 | orchestrator | 19:05:57.093 STDOUT terraform:  } 2025-06-22 19:05:57.096711 | orchestrator | 19:05:57.093 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-22 19:05:57.096718 | orchestrator | 19:05:57.093 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.096721 | orchestrator | 19:05:57.093 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096725 | orchestrator | 19:05:57.093 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096729 | orchestrator | 19:05:57.093 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096733 | orchestrator | 19:05:57.093 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096736 | orchestrator | 19:05:57.093 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-22 19:05:57.096740 | orchestrator | 19:05:57.093 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096744 | orchestrator | 19:05:57.093 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.096748 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.096751 | orchestrator | 19:05:57.093 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.096755 | orchestrator | 19:05:57.093 STDOUT terraform:  } 2025-06-22 19:05:57.096759 | orchestrator 
| 19:05:57.093 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-22 19:05:57.096762 | orchestrator | 19:05:57.093 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.096766 | orchestrator | 19:05:57.093 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.096773 | orchestrator | 19:05:57.093 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.096777 | orchestrator | 19:05:57.093 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.096781 | orchestrator | 19:05:57.093 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.096834 | orchestrator | 19:05:57.093 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-22 19:05:57.096895 | orchestrator | 19:05:57.096 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.096962 | orchestrator | 19:05:57.096 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.097002 | orchestrator | 19:05:57.096 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.097038 | orchestrator | 19:05:57.097 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.097060 | orchestrator | 19:05:57.097 STDOUT terraform:  } 2025-06-22 19:05:57.097113 | orchestrator | 19:05:57.097 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-22 19:05:57.097164 | orchestrator | 19:05:57.097 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.097318 | orchestrator | 19:05:57.097 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.097362 | orchestrator | 19:05:57.097 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.097405 | orchestrator | 19:05:57.097 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.097447 | orchestrator | 19:05:57.097 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.100999 | orchestrator | 19:05:57.097 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-22 19:05:57.109332 | orchestrator | 19:05:57.101 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109365 | orchestrator | 19:05:57.101 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.109371 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109377 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109382 | orchestrator | 19:05:57.101 STDOUT terraform:  } 2025-06-22 19:05:57.109386 | orchestrator | 19:05:57.101 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-22 19:05:57.109391 | orchestrator | 19:05:57.101 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.109395 | orchestrator | 19:05:57.101 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.109399 | orchestrator | 19:05:57.101 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109403 | orchestrator | 19:05:57.101 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109407 | orchestrator | 19:05:57.101 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.109411 | orchestrator | 19:05:57.101 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-22 19:05:57.109415 | orchestrator | 19:05:57.101 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109432 | orchestrator | 19:05:57.101 STDOUT 
terraform:  + size = 20 2025-06-22 19:05:57.109436 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109440 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109444 | orchestrator | 19:05:57.101 STDOUT terraform:  } 2025-06-22 19:05:57.109448 | orchestrator | 19:05:57.101 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-22 19:05:57.109452 | orchestrator | 19:05:57.101 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.109456 | orchestrator | 19:05:57.101 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.109459 | orchestrator | 19:05:57.101 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109463 | orchestrator | 19:05:57.101 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109467 | orchestrator | 19:05:57.101 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.109471 | orchestrator | 19:05:57.101 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-22 19:05:57.109475 | orchestrator | 19:05:57.101 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109496 | orchestrator | 19:05:57.101 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.109500 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109504 | orchestrator | 19:05:57.101 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109508 | orchestrator | 19:05:57.101 STDOUT terraform:  } 2025-06-22 19:05:57.109512 | orchestrator | 19:05:57.101 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-22 19:05:57.109516 | orchestrator | 19:05:57.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.109520 | orchestrator | 19:05:57.102 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.109524 | orchestrator | 19:05:57.102 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109527 | orchestrator | 19:05:57.102 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109532 | orchestrator | 19:05:57.102 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.109535 | orchestrator | 19:05:57.102 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-22 19:05:57.109539 | orchestrator | 19:05:57.102 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109550 | orchestrator | 19:05:57.102 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.109554 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109558 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109561 | orchestrator | 19:05:57.102 STDOUT terraform:  } 2025-06-22 19:05:57.109565 | orchestrator | 19:05:57.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-22 19:05:57.109569 | orchestrator | 19:05:57.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.109577 | orchestrator | 19:05:57.102 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.109581 | orchestrator | 19:05:57.102 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109584 | orchestrator | 19:05:57.102 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109588 | orchestrator | 
19:05:57.102 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.109592 | orchestrator | 19:05:57.102 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-22 19:05:57.109596 | orchestrator | 19:05:57.102 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109599 | orchestrator | 19:05:57.102 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.109603 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109607 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109611 | orchestrator | 19:05:57.102 STDOUT terraform:  } 2025-06-22 19:05:57.109615 | orchestrator | 19:05:57.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-22 19:05:57.109618 | orchestrator | 19:05:57.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:05:57.109622 | orchestrator | 19:05:57.102 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:05:57.109626 | orchestrator | 19:05:57.102 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109630 | orchestrator | 19:05:57.102 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109633 | orchestrator | 19:05:57.102 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:05:57.109637 | orchestrator | 19:05:57.102 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-22 19:05:57.109641 | orchestrator | 19:05:57.102 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109645 | orchestrator | 19:05:57.102 STDOUT terraform:  + size = 20 2025-06-22 19:05:57.109649 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:05:57.109652 | orchestrator | 19:05:57.102 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:05:57.109656 | orchestrator | 19:05:57.102 STDOUT terraform:  } 2025-06-22 19:05:57.109663 | orchestrator | 19:05:57.102 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-22 19:05:57.109667 | orchestrator | 19:05:57.102 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-22 19:05:57.109671 | orchestrator | 19:05:57.103 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.109675 | orchestrator | 19:05:57.103 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.109679 | orchestrator | 19:05:57.103 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.109682 | orchestrator | 19:05:57.103 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.109686 | orchestrator | 19:05:57.103 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109694 | orchestrator | 19:05:57.103 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.109698 | orchestrator | 19:05:57.103 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.109704 | orchestrator | 19:05:57.103 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.109708 | orchestrator | 19:05:57.103 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-22 19:05:57.109712 | orchestrator | 19:05:57.103 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.109716 | orchestrator | 19:05:57.103 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.109719 | orchestrator | 19:05:57.103 STDOUT terraform:  + id = (known after apply) 2025-06-22 
19:05:57.109723 | orchestrator | 19:05:57.103 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.109727 | orchestrator | 19:05:57.103 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.109731 | orchestrator | 19:05:57.103 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.109734 | orchestrator | 19:05:57.103 STDOUT terraform:  + name = "testbed-manager" 2025-06-22 19:05:57.109738 | orchestrator | 19:05:57.103 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.109742 | orchestrator | 19:05:57.103 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109746 | orchestrator | 19:05:57.103 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.109749 | orchestrator | 19:05:57.103 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.109753 | orchestrator | 19:05:57.103 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.109757 | orchestrator | 19:05:57.103 STDOUT terraform:  + user_data = (known after apply) 2025-06-22 19:05:57.109761 | orchestrator | 19:05:57.103 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.109765 | orchestrator | 19:05:57.103 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.109768 | orchestrator | 19:05:57.103 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.109772 | orchestrator | 19:05:57.103 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.109776 | orchestrator | 19:05:57.103 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.109780 | orchestrator | 19:05:57.103 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.109783 | orchestrator | 19:05:57.103 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.109787 | orchestrator | 19:05:57.103 STDOUT terraform:  } 2025-06-22 19:05:57.109791 | orchestrator | 19:05:57.103 STDOUT terraform:  + network { 2025-06-22 19:05:57.109795 | orchestrator | 19:05:57.103 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.109799 | orchestrator | 19:05:57.103 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.109802 | orchestrator | 19:05:57.103 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.109806 | orchestrator | 19:05:57.103 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.109813 | orchestrator | 19:05:57.103 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.109817 | orchestrator | 19:05:57.103 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.109821 | orchestrator | 19:05:57.103 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.109825 | orchestrator | 19:05:57.103 STDOUT terraform:  } 2025-06-22 19:05:57.109829 | orchestrator | 19:05:57.104 STDOUT terraform:  } 2025-06-22 19:05:57.109833 | orchestrator | 19:05:57.104 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-22 19:05:57.109837 | orchestrator | 19:05:57.104 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.109841 | orchestrator | 19:05:57.104 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.109847 | orchestrator | 19:05:57.104 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.109851 | orchestrator | 19:05:57.104 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.109854 | orchestrator | 19:05:57.104 STDOUT terraform:  + all_tags = (known after apply) 
2025-06-22 19:05:57.109861 | orchestrator | 19:05:57.104 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.109865 | orchestrator | 19:05:57.104 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.109869 | orchestrator | 19:05:57.104 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.109872 | orchestrator | 19:05:57.104 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.109876 | orchestrator | 19:05:57.104 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.109880 | orchestrator | 19:05:57.104 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.109884 | orchestrator | 19:05:57.104 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.109887 | orchestrator | 19:05:57.104 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.109891 | orchestrator | 19:05:57.104 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.109895 | orchestrator | 19:05:57.104 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.109899 | orchestrator | 19:05:57.104 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.109902 | orchestrator | 19:05:57.104 STDOUT terraform:  + name = "testbed-node-0" 2025-06-22 19:05:57.109906 | orchestrator | 19:05:57.104 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.109910 | orchestrator | 19:05:57.104 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.109914 | orchestrator | 19:05:57.104 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.109917 | orchestrator | 19:05:57.104 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.109921 | orchestrator | 19:05:57.104 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.109980 | orchestrator | 19:05:57.104 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.109984 | orchestrator | 19:05:57.104 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.109991 | orchestrator | 19:05:57.104 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.109995 | orchestrator | 19:05:57.104 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.109999 | orchestrator | 19:05:57.104 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.110002 | orchestrator | 19:05:57.104 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.110006 | orchestrator | 19:05:57.104 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.110010 | orchestrator | 19:05:57.104 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.110030 | orchestrator | 19:05:57.104 STDOUT terraform:  } 2025-06-22 19:05:57.110034 | orchestrator | 19:05:57.104 STDOUT terraform:  + network { 2025-06-22 19:05:57.110038 | orchestrator | 19:05:57.104 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.110042 | orchestrator | 19:05:57.104 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.110046 | orchestrator | 19:05:57.104 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.110050 | orchestrator | 19:05:57.104 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.110053 | orchestrator | 19:05:57.104 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.110057 | orchestrator | 19:05:57.104 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.110061 | orchestrator | 19:05:57.105 STDOUT terraform:  + uuid = (known after apply) 
2025-06-22 19:05:57.110065 | orchestrator | 19:05:57.105 STDOUT terraform:  } 2025-06-22 19:05:57.110069 | orchestrator | 19:05:57.105 STDOUT terraform:  } 2025-06-22 19:05:57.110073 | orchestrator | 19:05:57.105 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-22 19:05:57.110076 | orchestrator | 19:05:57.105 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.110083 | orchestrator | 19:05:57.105 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.110087 | orchestrator | 19:05:57.105 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.110091 | orchestrator | 19:05:57.105 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.110095 | orchestrator | 19:05:57.105 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.110099 | orchestrator | 19:05:57.105 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.110102 | orchestrator | 19:05:57.105 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.110106 | orchestrator | 19:05:57.105 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.110110 | orchestrator | 19:05:57.105 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.110114 | orchestrator | 19:05:57.105 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.110118 | orchestrator | 19:05:57.105 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.110125 | orchestrator | 19:05:57.105 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.110132 | orchestrator | 19:05:57.105 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.110135 | orchestrator | 19:05:57.105 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.110139 | orchestrator | 19:05:57.105 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.110143 | orchestrator | 19:05:57.105 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.110148 | orchestrator | 19:05:57.105 STDOUT terraform:  + name = "testbed-node-1" 2025-06-22 19:05:57.110151 | orchestrator | 19:05:57.105 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.110155 | orchestrator | 19:05:57.105 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.110159 | orchestrator | 19:05:57.105 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.110163 | orchestrator | 19:05:57.105 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.110167 | orchestrator | 19:05:57.105 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.110171 | orchestrator | 19:05:57.105 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.110175 | orchestrator | 19:05:57.105 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.110178 | orchestrator | 19:05:57.105 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.110185 | orchestrator | 19:05:57.105 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.110188 | orchestrator | 19:05:57.105 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.110192 | orchestrator | 19:05:57.105 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.110196 | orchestrator | 19:05:57.105 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.110199 | orchestrator | 19:05:57.105 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.110203 | 
orchestrator | 19:05:57.105 STDOUT terraform:  } 2025-06-22 19:05:57.110207 | orchestrator | 19:05:57.105 STDOUT terraform:  + network { 2025-06-22 19:05:57.110211 | orchestrator | 19:05:57.105 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.110214 | orchestrator | 19:05:57.105 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.110218 | orchestrator | 19:05:57.105 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.110222 | orchestrator | 19:05:57.105 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.114044 | orchestrator | 19:05:57.106 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.114063 | orchestrator | 19:05:57.110 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.114067 | orchestrator | 19:05:57.110 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114071 | orchestrator | 19:05:57.110 STDOUT terraform:  } 2025-06-22 19:05:57.114075 | orchestrator | 19:05:57.110 STDOUT terraform:  } 2025-06-22 19:05:57.114079 | orchestrator | 19:05:57.110 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-22 19:05:57.114083 | orchestrator | 19:05:57.110 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.114093 | orchestrator | 19:05:57.110 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.114097 | orchestrator | 19:05:57.110 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.114100 | orchestrator | 19:05:57.110 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.114104 | orchestrator | 19:05:57.110 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.114108 | orchestrator | 19:05:57.110 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.114112 | orchestrator | 19:05:57.110 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.114116 | orchestrator | 19:05:57.110 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.114119 | orchestrator | 19:05:57.110 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.114123 | orchestrator | 19:05:57.110 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.114127 | orchestrator | 19:05:57.110 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.114130 | orchestrator | 19:05:57.110 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.114134 | orchestrator | 19:05:57.110 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.114138 | orchestrator | 19:05:57.110 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.114142 | orchestrator | 19:05:57.110 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.114145 | orchestrator | 19:05:57.110 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.114149 | orchestrator | 19:05:57.110 STDOUT terraform:  + name = "testbed-node-2" 2025-06-22 19:05:57.114153 | orchestrator | 19:05:57.110 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.114156 | orchestrator | 19:05:57.110 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.114160 | orchestrator | 19:05:57.111 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.114164 | orchestrator | 19:05:57.111 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.114168 | orchestrator | 19:05:57.111 STDOUT terraform:  + updated = (known 
after apply) 2025-06-22 19:05:57.114171 | orchestrator | 19:05:57.111 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.114176 | orchestrator | 19:05:57.111 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.114180 | orchestrator | 19:05:57.111 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.114184 | orchestrator | 19:05:57.111 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.114191 | orchestrator | 19:05:57.111 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.114195 | orchestrator | 19:05:57.111 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.114199 | orchestrator | 19:05:57.111 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.114203 | orchestrator | 19:05:57.111 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114210 | orchestrator | 19:05:57.111 STDOUT terraform:  } 2025-06-22 19:05:57.114214 | orchestrator | 19:05:57.111 STDOUT terraform:  + network { 2025-06-22 19:05:57.114222 | orchestrator | 19:05:57.111 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.114226 | orchestrator | 19:05:57.111 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.114230 | orchestrator | 19:05:57.111 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.114234 | orchestrator | 19:05:57.111 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.114237 | orchestrator | 19:05:57.111 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.114241 | orchestrator | 19:05:57.111 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.114245 | orchestrator | 19:05:57.111 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114249 | orchestrator | 19:05:57.111 STDOUT terraform:  } 2025-06-22 19:05:57.114255 | orchestrator | 19:05:57.111 STDOUT terraform:  } 2025-06-22 19:05:57.114259 | orchestrator | 19:05:57.111 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-22 19:05:57.114263 | orchestrator | 19:05:57.111 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.114267 | orchestrator | 19:05:57.111 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.114271 | orchestrator | 19:05:57.111 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.114274 | orchestrator | 19:05:57.111 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.114278 | orchestrator | 19:05:57.111 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.114282 | orchestrator | 19:05:57.111 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.114286 | orchestrator | 19:05:57.111 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.114290 | orchestrator | 19:05:57.111 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.114293 | orchestrator | 19:05:57.111 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.114297 | orchestrator | 19:05:57.111 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.114301 | orchestrator | 19:05:57.111 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.114304 | orchestrator | 19:05:57.111 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.114308 | orchestrator | 19:05:57.111 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.114312 | orchestrator | 19:05:57.111 STDOUT 
terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.114315 | orchestrator | 19:05:57.111 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.114319 | orchestrator | 19:05:57.111 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.114323 | orchestrator | 19:05:57.111 STDOUT terraform:  + name = "testbed-node-3" 2025-06-22 19:05:57.114326 | orchestrator | 19:05:57.112 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.114336 | orchestrator | 19:05:57.112 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.114340 | orchestrator | 19:05:57.112 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.114344 | orchestrator | 19:05:57.112 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.114347 | orchestrator | 19:05:57.112 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.114351 | orchestrator | 19:05:57.112 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.114355 | orchestrator | 19:05:57.112 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.114359 | orchestrator | 19:05:57.112 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.114362 | orchestrator | 19:05:57.112 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.114366 | orchestrator | 19:05:57.112 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.114372 | orchestrator | 19:05:57.112 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.114376 | orchestrator | 19:05:57.112 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.114379 | orchestrator | 19:05:57.112 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114383 | orchestrator | 19:05:57.112 STDOUT terraform:  } 2025-06-22 19:05:57.114387 | orchestrator | 19:05:57.112 STDOUT terraform:  + network { 2025-06-22 19:05:57.114391 | orchestrator | 19:05:57.112 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.114394 | orchestrator | 19:05:57.112 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.114398 | orchestrator | 19:05:57.112 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.114402 | orchestrator | 19:05:57.112 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.114406 | orchestrator | 19:05:57.112 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.114409 | orchestrator | 19:05:57.112 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.114413 | orchestrator | 19:05:57.112 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114417 | orchestrator | 19:05:57.112 STDOUT terraform:  } 2025-06-22 19:05:57.114420 | orchestrator | 19:05:57.112 STDOUT terraform:  } 2025-06-22 19:05:57.114424 | orchestrator | 19:05:57.112 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-06-22 19:05:57.114428 | orchestrator | 19:05:57.112 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.114432 | orchestrator | 19:05:57.112 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.114435 | orchestrator | 19:05:57.112 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.114439 | orchestrator | 19:05:57.112 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.114443 | orchestrator | 19:05:57.112 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.114446 | 
orchestrator | 19:05:57.112 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.114453 | orchestrator | 19:05:57.112 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.114460 | orchestrator | 19:05:57.112 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.114463 | orchestrator | 19:05:57.112 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.114467 | orchestrator | 19:05:57.112 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.114471 | orchestrator | 19:05:57.112 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.114475 | orchestrator | 19:05:57.112 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.114478 | orchestrator | 19:05:57.112 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.114482 | orchestrator | 19:05:57.112 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.114488 | orchestrator | 19:05:57.112 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.114492 | orchestrator | 19:05:57.113 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.114496 | orchestrator | 19:05:57.113 STDOUT terraform:  + name = "testbed-node-4" 2025-06-22 19:05:57.114500 | orchestrator | 19:05:57.113 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.114503 | orchestrator | 19:05:57.113 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.114507 | orchestrator | 19:05:57.113 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.114511 | orchestrator | 19:05:57.113 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.114514 | orchestrator | 19:05:57.113 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.114518 | orchestrator | 19:05:57.113 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.114524 | orchestrator | 19:05:57.113 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.114528 | orchestrator | 19:05:57.113 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.114531 | orchestrator | 19:05:57.113 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.114535 | orchestrator | 19:05:57.113 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.114539 | orchestrator | 19:05:57.113 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.114542 | orchestrator | 19:05:57.113 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.114546 | orchestrator | 19:05:57.113 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114550 | orchestrator | 19:05:57.113 STDOUT terraform:  } 2025-06-22 19:05:57.114554 | orchestrator | 19:05:57.113 STDOUT terraform:  + network { 2025-06-22 19:05:57.114557 | orchestrator | 19:05:57.113 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.114561 | orchestrator | 19:05:57.113 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.114565 | orchestrator | 19:05:57.113 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.114569 | orchestrator | 19:05:57.113 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.114576 | orchestrator | 19:05:57.113 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.114580 | orchestrator | 19:05:57.113 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.114583 | orchestrator | 19:05:57.113 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.114587 | 
orchestrator | 19:05:57.113 STDOUT terraform:  } 2025-06-22 19:05:57.114591 | orchestrator | 19:05:57.113 STDOUT terraform:  } 2025-06-22 19:05:57.114594 | orchestrator | 19:05:57.113 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-06-22 19:05:57.114598 | orchestrator | 19:05:57.113 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:05:57.114602 | orchestrator | 19:05:57.113 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:05:57.114606 | orchestrator | 19:05:57.113 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:05:57.114610 | orchestrator | 19:05:57.113 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:05:57.114613 | orchestrator | 19:05:57.113 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.114617 | orchestrator | 19:05:57.113 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:05:57.114621 | orchestrator | 19:05:57.113 STDOUT terraform:  + config_drive = true 2025-06-22 19:05:57.114624 | orchestrator | 19:05:57.113 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:05:57.114628 | orchestrator | 19:05:57.113 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:05:57.114632 | orchestrator | 19:05:57.113 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:05:57.114636 | orchestrator | 19:05:57.113 STDOUT terraform:  + force_delete = false 2025-06-22 19:05:57.114639 | orchestrator | 19:05:57.113 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:05:57.114643 | orchestrator | 19:05:57.113 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.114667 | orchestrator | 19:05:57.113 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:05:57.114714 | orchestrator | 19:05:57.114 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:05:57.114749 | orchestrator | 19:05:57.114 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:05:57.114787 | orchestrator | 19:05:57.114 STDOUT terraform:  + name = "testbed-node-5" 2025-06-22 19:05:57.114821 | orchestrator | 19:05:57.114 STDOUT terraform:  + power_state = "active" 2025-06-22 19:05:57.114863 | orchestrator | 19:05:57.114 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.114907 | orchestrator | 19:05:57.114 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:05:57.114976 | orchestrator | 19:05:57.114 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:05:57.115022 | orchestrator | 19:05:57.114 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:05:57.115082 | orchestrator | 19:05:57.115 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:05:57.115107 | orchestrator | 19:05:57.115 STDOUT terraform:  + block_device { 2025-06-22 19:05:57.115148 | orchestrator | 19:05:57.115 STDOUT terraform:  + boot_index = 0 2025-06-22 19:05:57.115183 | orchestrator | 19:05:57.115 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:05:57.115221 | orchestrator | 19:05:57.115 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:05:57.115256 | orchestrator | 19:05:57.115 STDOUT terraform:  + multiattach = false 2025-06-22 19:05:57.115294 | orchestrator | 19:05:57.115 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:05:57.115342 | orchestrator | 19:05:57.115 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.115366 | orchestrator | 19:05:57.115 
STDOUT terraform:  } 2025-06-22 19:05:57.115388 | orchestrator | 19:05:57.115 STDOUT terraform:  + network { 2025-06-22 19:05:57.115415 | orchestrator | 19:05:57.115 STDOUT terraform:  + access_network = false 2025-06-22 19:05:57.115454 | orchestrator | 19:05:57.115 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:05:57.115491 | orchestrator | 19:05:57.115 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:05:57.115551 | orchestrator | 19:05:57.115 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:05:57.115593 | orchestrator | 19:05:57.115 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:05:57.115636 | orchestrator | 19:05:57.115 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:05:57.115676 | orchestrator | 19:05:57.115 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:05:57.115697 | orchestrator | 19:05:57.115 STDOUT terraform:  } 2025-06-22 19:05:57.115718 | orchestrator | 19:05:57.115 STDOUT terraform:  } 2025-06-22 19:05:57.115792 | orchestrator | 19:05:57.115 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-06-22 19:05:57.115835 | orchestrator | 19:05:57.115 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-06-22 19:05:57.115874 | orchestrator | 19:05:57.115 STDOUT terraform:  + fingerprint = (known after apply) 2025-06-22 19:05:57.115914 | orchestrator | 19:05:57.115 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.115965 | orchestrator | 19:05:57.115 STDOUT terraform:  + name = "testbed" 2025-06-22 19:05:57.115998 | orchestrator | 19:05:57.115 STDOUT terraform:  + private_key = (sensitive value) 2025-06-22 19:05:57.116037 | orchestrator | 19:05:57.116 STDOUT terraform:  + public_key = (known after apply) 2025-06-22 19:05:57.116072 | orchestrator | 19:05:57.116 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.116112 | orchestrator | 19:05:57.116 STDOUT terraform:  + user_id = (known after apply) 2025-06-22 19:05:57.116133 | orchestrator | 19:05:57.116 STDOUT terraform:  } 2025-06-22 19:05:57.116191 | orchestrator | 19:05:57.116 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-06-22 19:05:57.116249 | orchestrator | 19:05:57.116 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.116284 | orchestrator | 19:05:57.116 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.116323 | orchestrator | 19:05:57.116 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.116363 | orchestrator | 19:05:57.116 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.116399 | orchestrator | 19:05:57.116 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.116436 | orchestrator | 19:05:57.116 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.116457 | orchestrator | 19:05:57.116 STDOUT terraform:  } 2025-06-22 19:05:57.116512 | orchestrator | 19:05:57.116 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-06-22 19:05:57.116570 | orchestrator | 19:05:57.116 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.116605 | orchestrator | 19:05:57.116 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.116640 | orchestrator | 19:05:57.116 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.116674 | 
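The keypair and volume-attachment entries above map to two small resources. In the keypair, no public_key argument is supplied, which is why the plan marks private_key as a sensitive generated value and public_key as known after apply. The sketch below is hedged: the volume resource name and the node-to-volume pairing are assumptions, since the volumes themselves are planned elsewhere in this output.

```hcl
# Generated keypair: with no public_key argument, OpenStack creates the key
# and Terraform exports the private key as a sensitive attribute.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

# The plan shows nine indexed attachments ([0]..[8]); the exact node-to-volume
# pairing is not visible in this excerpt, so both references are placeholders.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```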
orchestrator | 19:05:57.116 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.116710 | orchestrator | 19:05:57.116 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.116744 | orchestrator | 19:05:57.116 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.116819 | orchestrator | 19:05:57.116 STDOUT terraform:  } 2025-06-22 19:05:57.116877 | orchestrator | 19:05:57.116 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-06-22 19:05:57.116952 | orchestrator | 19:05:57.116 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.116988 | orchestrator | 19:05:57.116 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.117031 | orchestrator | 19:05:57.117 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.117070 | orchestrator | 19:05:57.117 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.117105 | orchestrator | 19:05:57.117 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.117139 | orchestrator | 19:05:57.117 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.117162 | orchestrator | 19:05:57.117 STDOUT terraform:  } 2025-06-22 19:05:57.117229 | orchestrator | 19:05:57.117 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-22 19:05:57.117284 | orchestrator | 19:05:57.117 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.117325 | orchestrator | 19:05:57.117 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.117364 | orchestrator | 19:05:57.117 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.117406 | orchestrator | 19:05:57.117 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.117442 | orchestrator | 19:05:57.117 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.117476 | orchestrator | 19:05:57.117 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.117496 | orchestrator | 19:05:57.117 STDOUT terraform:  } 2025-06-22 19:05:57.117553 | orchestrator | 19:05:57.117 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-22 19:05:57.117611 | orchestrator | 19:05:57.117 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.117646 | orchestrator | 19:05:57.117 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.117685 | orchestrator | 19:05:57.117 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.117725 | orchestrator | 19:05:57.117 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.117764 | orchestrator | 19:05:57.117 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.117798 | orchestrator | 19:05:57.117 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.117819 | orchestrator | 19:05:57.117 STDOUT terraform:  } 2025-06-22 19:05:57.117875 | orchestrator | 19:05:57.117 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-22 19:05:57.117941 | orchestrator | 19:05:57.117 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.117980 | orchestrator | 19:05:57.117 STDOUT terraform:  + device = (known after 
apply) 2025-06-22 19:05:57.118031 | orchestrator | 19:05:57.117 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.118068 | orchestrator | 19:05:57.118 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.118102 | orchestrator | 19:05:57.118 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.118141 | orchestrator | 19:05:57.118 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.118169 | orchestrator | 19:05:57.118 STDOUT terraform:  } 2025-06-22 19:05:57.118228 | orchestrator | 19:05:57.118 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-22 19:05:57.118283 | orchestrator | 19:05:57.118 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.118318 | orchestrator | 19:05:57.118 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.118353 | orchestrator | 19:05:57.118 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.118393 | orchestrator | 19:05:57.118 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.118428 | orchestrator | 19:05:57.118 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.118467 | orchestrator | 19:05:57.118 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.118489 | orchestrator | 19:05:57.118 STDOUT terraform:  } 2025-06-22 19:05:57.118546 | orchestrator | 19:05:57.118 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-22 19:05:57.118608 | orchestrator | 19:05:57.118 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.118643 | orchestrator | 19:05:57.118 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.118679 | orchestrator | 19:05:57.118 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.118714 | orchestrator | 19:05:57.118 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.118752 | orchestrator | 19:05:57.118 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.118791 | orchestrator | 19:05:57.118 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.118812 | orchestrator | 19:05:57.118 STDOUT terraform:  } 2025-06-22 19:05:57.118867 | orchestrator | 19:05:57.118 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-22 19:05:57.118958 | orchestrator | 19:05:57.118 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:05:57.119001 | orchestrator | 19:05:57.118 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:05:57.119038 | orchestrator | 19:05:57.119 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.119081 | orchestrator | 19:05:57.119 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:05:57.119123 | orchestrator | 19:05:57.119 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.119159 | orchestrator | 19:05:57.119 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:05:57.119180 | orchestrator | 19:05:57.119 STDOUT terraform:  } 2025-06-22 19:05:57.119252 | orchestrator | 19:05:57.119 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-22 19:05:57.119315 | orchestrator | 19:05:57.119 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-22 19:05:57.119351 | orchestrator | 19:05:57.119 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:05:57.119390 | orchestrator | 19:05:57.119 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-22 19:05:57.119427 | orchestrator | 19:05:57.119 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.119462 | orchestrator | 19:05:57.119 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:05:57.119503 | orchestrator | 19:05:57.119 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.119528 | orchestrator | 19:05:57.119 STDOUT terraform:  } 2025-06-22 19:05:57.119591 | orchestrator | 19:05:57.119 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-22 19:05:57.119645 | orchestrator | 19:05:57.119 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-22 19:05:57.119677 | orchestrator | 19:05:57.119 STDOUT terraform:  + address = (known after apply) 2025-06-22 19:05:57.119712 | orchestrator | 19:05:57.119 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.119743 | orchestrator | 19:05:57.119 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:05:57.119778 | orchestrator | 19:05:57.119 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.119810 | orchestrator | 19:05:57.119 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:05:57.119846 | orchestrator | 19:05:57.119 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.119878 | orchestrator | 19:05:57.119 STDOUT terraform:  + pool = "public" 2025-06-22 19:05:57.119919 | orchestrator | 19:05:57.119 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:05:57.119965 | orchestrator | 19:05:57.119 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.120005 | orchestrator | 19:05:57.119 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.120042 | orchestrator | 19:05:57.120 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.120063 | orchestrator | 19:05:57.120 STDOUT terraform:  } 2025-06-22 19:05:57.120113 | orchestrator | 19:05:57.120 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-22 19:05:57.120165 | orchestrator | 19:05:57.120 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-22 19:05:57.120210 | orchestrator | 19:05:57.120 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.120254 | orchestrator | 19:05:57.120 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.120284 | orchestrator | 19:05:57.120 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 19:05:57.120312 | orchestrator | 19:05:57.120 STDOUT terraform:  + "nova", 2025-06-22 19:05:57.120337 | orchestrator | 19:05:57.120 STDOUT terraform:  ] 2025-06-22 19:05:57.120383 | orchestrator | 19:05:57.120 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:05:57.120427 | orchestrator | 19:05:57.120 STDOUT terraform:  + external = (known after apply) 2025-06-22 19:05:57.120470 | orchestrator | 19:05:57.120 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.120514 | orchestrator | 19:05:57.120 STDOUT terraform:  + mtu = (known after apply) 2025-06-22 19:05:57.120563 | orchestrator | 19:05:57.120 STDOUT terraform:  + name = 
"net-testbed-management" 2025-06-22 19:05:57.120606 | orchestrator | 19:05:57.120 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.120650 | orchestrator | 19:05:57.120 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.120699 | orchestrator | 19:05:57.120 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.120748 | orchestrator | 19:05:57.120 STDOUT terraform:  + shared = (known after apply) 2025-06-22 19:05:57.120798 | orchestrator | 19:05:57.120 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.120846 | orchestrator | 19:05:57.120 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-22 19:05:57.120877 | orchestrator | 19:05:57.120 STDOUT terraform:  + segments (known after apply) 2025-06-22 19:05:57.120907 | orchestrator | 19:05:57.120 STDOUT terraform:  } 2025-06-22 19:05:57.120988 | orchestrator | 19:05:57.120 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-22 19:05:57.121043 | orchestrator | 19:05:57.120 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-22 19:05:57.121090 | orchestrator | 19:05:57.121 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.121135 | orchestrator | 19:05:57.121 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.121181 | orchestrator | 19:05:57.121 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.121224 | orchestrator | 19:05:57.121 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.121276 | orchestrator | 19:05:57.121 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.121323 | orchestrator | 19:05:57.121 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.121366 | orchestrator | 19:05:57.121 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.121413 | orchestrator | 19:05:57.121 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.121457 | orchestrator | 19:05:57.121 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.121506 | orchestrator | 19:05:57.121 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.121549 | orchestrator | 19:05:57.121 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.121594 | orchestrator | 19:05:57.121 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.121641 | orchestrator | 19:05:57.121 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.121685 | orchestrator | 19:05:57.121 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.121727 | orchestrator | 19:05:57.121 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.121779 | orchestrator | 19:05:57.121 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.121808 | orchestrator | 19:05:57.121 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.121845 | orchestrator | 19:05:57.121 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.121867 | orchestrator | 19:05:57.121 STDOUT terraform:  } 2025-06-22 19:05:57.121894 | orchestrator | 19:05:57.121 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.121954 | orchestrator | 19:05:57.121 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.121978 | orchestrator | 19:05:57.121 STDOUT 
terraform:  } 2025-06-22 19:05:57.122008 | orchestrator | 19:05:57.121 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.123846 | orchestrator | 19:05:57.123 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.123902 | orchestrator | 19:05:57.123 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-06-22 19:05:57.123990 | orchestrator | 19:05:57.123 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.124016 | orchestrator | 19:05:57.124 STDOUT terraform:  } 2025-06-22 19:05:57.124039 | orchestrator | 19:05:57.124 STDOUT terraform:  } 2025-06-22 19:05:57.124093 | orchestrator | 19:05:57.124 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-06-22 19:05:57.124155 | orchestrator | 19:05:57.124 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.124199 | orchestrator | 19:05:57.124 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.124244 | orchestrator | 19:05:57.124 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.124285 | orchestrator | 19:05:57.124 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.124327 | orchestrator | 19:05:57.124 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.124369 | orchestrator | 19:05:57.124 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.124424 | orchestrator | 19:05:57.124 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.124472 | orchestrator | 19:05:57.124 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.124516 | orchestrator | 19:05:57.124 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.124560 | orchestrator | 19:05:57.124 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.124609 | orchestrator | 19:05:57.124 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.124651 | orchestrator | 19:05:57.124 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.124696 | orchestrator | 19:05:57.124 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.124742 | orchestrator | 19:05:57.124 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.124785 | orchestrator | 19:05:57.124 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.124826 | orchestrator | 19:05:57.124 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.124868 | orchestrator | 19:05:57.124 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.124904 | orchestrator | 19:05:57.124 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.124960 | orchestrator | 19:05:57.124 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.124988 | orchestrator | 19:05:57.124 STDOUT terraform:  } 2025-06-22 19:05:57.125015 | orchestrator | 19:05:57.124 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.125053 | orchestrator | 19:05:57.125 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.125077 | orchestrator | 19:05:57.125 STDOUT terraform:  } 2025-06-22 19:05:57.125103 | orchestrator | 19:05:57.125 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.125138 | orchestrator | 19:05:57.125 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.125160 | orchestrator | 19:05:57.125 STDOUT terraform:  } 
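The manager_port_management resource planned above pins the manager to 192.168.16.5 and allows the port to carry traffic for 192.168.112.0/20 and 192.168.16.8/20 via allowed_address_pairs. A sketch follows; the subnet and security-group references are assumptions, since those resources are planned elsewhere in the log.

```hcl
# Sketch of the manager's management port; subnet_management and
# security_group_management are assumed resource names.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  # Additional addresses this port may send and receive traffic for.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}
```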
2025-06-22 19:05:57.125190 | orchestrator | 19:05:57.125 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.125244 | orchestrator | 19:05:57.125 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.125284 | orchestrator | 19:05:57.125 STDOUT terraform:  } 2025-06-22 19:05:57.125325 | orchestrator | 19:05:57.125 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.125355 | orchestrator | 19:05:57.125 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.125406 | orchestrator | 19:05:57.125 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-06-22 19:05:57.125463 | orchestrator | 19:05:57.125 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.125497 | orchestrator | 19:05:57.125 STDOUT terraform:  } 2025-06-22 19:05:57.125530 | orchestrator | 19:05:57.125 STDOUT terraform:  } 2025-06-22 19:05:57.125603 | orchestrator | 19:05:57.125 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-06-22 19:05:57.125665 | orchestrator | 19:05:57.125 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.125716 | orchestrator | 19:05:57.125 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.125765 | orchestrator | 19:05:57.125 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.125813 | orchestrator | 19:05:57.125 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.125857 | orchestrator | 19:05:57.125 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.125900 | orchestrator | 19:05:57.125 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.125976 | orchestrator | 19:05:57.125 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.126036 | orchestrator | 19:05:57.125 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.126082 | orchestrator | 19:05:57.126 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.126126 | orchestrator | 19:05:57.126 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.126172 | orchestrator | 19:05:57.126 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.126221 | orchestrator | 19:05:57.126 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.126263 | orchestrator | 19:05:57.126 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.126315 | orchestrator | 19:05:57.126 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.126363 | orchestrator | 19:05:57.126 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.126410 | orchestrator | 19:05:57.126 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.126455 | orchestrator | 19:05:57.126 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.126483 | orchestrator | 19:05:57.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.126518 | orchestrator | 19:05:57.126 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.126542 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 19:05:57.126569 | orchestrator | 19:05:57.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.126604 | orchestrator | 19:05:57.126 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.126625 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 
19:05:57.126659 | orchestrator | 19:05:57.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.126694 | orchestrator | 19:05:57.126 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.126714 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 19:05:57.126749 | orchestrator | 19:05:57.126 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.126787 | orchestrator | 19:05:57.126 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.126809 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 19:05:57.126838 | orchestrator | 19:05:57.126 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.126859 | orchestrator | 19:05:57.126 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.126896 | orchestrator | 19:05:57.126 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-22 19:05:57.126948 | orchestrator | 19:05:57.126 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.126973 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 19:05:57.127001 | orchestrator | 19:05:57.126 STDOUT terraform:  } 2025-06-22 19:05:57.127056 | orchestrator | 19:05:57.127 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-22 19:05:57.127107 | orchestrator | 19:05:57.127 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.127148 | orchestrator | 19:05:57.127 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.127190 | orchestrator | 19:05:57.127 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.127240 | orchestrator | 19:05:57.127 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.127283 | orchestrator | 19:05:57.127 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.127337 | orchestrator | 19:05:57.127 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.127380 | orchestrator | 19:05:57.127 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.127430 | orchestrator | 19:05:57.127 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.127481 | orchestrator | 19:05:57.127 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.127524 | orchestrator | 19:05:57.127 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.127565 | orchestrator | 19:05:57.127 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.127610 | orchestrator | 19:05:57.127 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.127650 | orchestrator | 19:05:57.127 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.127692 | orchestrator | 19:05:57.127 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.127738 | orchestrator | 19:05:57.127 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.127779 | orchestrator | 19:05:57.127 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.127820 | orchestrator | 19:05:57.127 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.127846 | orchestrator | 19:05:57.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.127890 | orchestrator | 19:05:57.127 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.127920 | orchestrator | 19:05:57.127 STDOUT terraform:  } 2025-06-22 19:05:57.127969 | 
orchestrator | 19:05:57.127 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.128008 | orchestrator | 19:05:57.127 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.128034 | orchestrator | 19:05:57.128 STDOUT terraform:  } 2025-06-22 19:05:57.128060 | orchestrator | 19:05:57.128 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.128093 | orchestrator | 19:05:57.128 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.128142 | orchestrator | 19:05:57.128 STDOUT terraform:  } 2025-06-22 19:05:57.128169 | orchestrator | 19:05:57.128 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.128202 | orchestrator | 19:05:57.128 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.128221 | orchestrator | 19:05:57.128 STDOUT terraform:  } 2025-06-22 19:05:57.128259 | orchestrator | 19:05:57.128 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.128283 | orchestrator | 19:05:57.128 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.128322 | orchestrator | 19:05:57.128 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-22 19:05:57.128357 | orchestrator | 19:05:57.128 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.128376 | orchestrator | 19:05:57.128 STDOUT terraform:  } 2025-06-22 19:05:57.128396 | orchestrator | 19:05:57.128 STDOUT terraform:  } 2025-06-22 19:05:57.128450 | orchestrator | 19:05:57.128 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-22 19:05:57.128500 | orchestrator | 19:05:57.128 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.128544 | orchestrator | 19:05:57.128 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.128585 | orchestrator | 19:05:57.128 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.128629 | orchestrator | 19:05:57.128 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.128677 | orchestrator | 19:05:57.128 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.128718 | orchestrator | 19:05:57.128 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.128764 | orchestrator | 19:05:57.128 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.128811 | orchestrator | 19:05:57.128 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.128854 | orchestrator | 19:05:57.128 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.128897 | orchestrator | 19:05:57.128 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.128953 | orchestrator | 19:05:57.128 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.129001 | orchestrator | 19:05:57.128 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.129041 | orchestrator | 19:05:57.129 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.129094 | orchestrator | 19:05:57.129 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.129148 | orchestrator | 19:05:57.129 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.129193 | orchestrator | 19:05:57.129 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.129235 | orchestrator | 19:05:57.129 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.129261 | orchestrator | 
19:05:57.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.129295 | orchestrator | 19:05:57.129 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.129321 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129346 | orchestrator | 19:05:57.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.129385 | orchestrator | 19:05:57.129 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.129410 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129443 | orchestrator | 19:05:57.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.129482 | orchestrator | 19:05:57.129 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.129508 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129534 | orchestrator | 19:05:57.129 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.129571 | orchestrator | 19:05:57.129 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.129591 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129621 | orchestrator | 19:05:57.129 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.129646 | orchestrator | 19:05:57.129 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.129677 | orchestrator | 19:05:57.129 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-22 19:05:57.129712 | orchestrator | 19:05:57.129 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.129732 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129756 | orchestrator | 19:05:57.129 STDOUT terraform:  } 2025-06-22 19:05:57.129816 | orchestrator | 19:05:57.129 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-22 19:05:57.129869 | orchestrator | 19:05:57.129 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.129911 | orchestrator | 19:05:57.129 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.129985 | orchestrator | 19:05:57.129 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.130112 | orchestrator | 19:05:57.129 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.130171 | orchestrator | 19:05:57.130 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.130213 | orchestrator | 19:05:57.130 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.130256 | orchestrator | 19:05:57.130 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.130297 | orchestrator | 19:05:57.130 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.130342 | orchestrator | 19:05:57.130 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.130384 | orchestrator | 19:05:57.130 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.130431 | orchestrator | 19:05:57.130 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.130474 | orchestrator | 19:05:57.130 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.130516 | orchestrator | 19:05:57.130 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:05:57.130565 | orchestrator | 19:05:57.130 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.130606 | orchestrator | 19:05:57.130 STDOUT terraform:  + region = (known after apply) 
2025-06-22 19:05:57.130656 | orchestrator | 19:05:57.130 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.130697 | orchestrator | 19:05:57.130 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.130725 | orchestrator | 19:05:57.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.130765 | orchestrator | 19:05:57.130 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.130785 | orchestrator | 19:05:57.130 STDOUT terraform:  } 2025-06-22 19:05:57.130815 | orchestrator | 19:05:57.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.130852 | orchestrator | 19:05:57.130 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.130871 | orchestrator | 19:05:57.130 STDOUT terraform:  } 2025-06-22 19:05:57.130896 | orchestrator | 19:05:57.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.130948 | orchestrator | 19:05:57.130 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.130970 | orchestrator | 19:05:57.130 STDOUT terraform:  } 2025-06-22 19:05:57.131000 | orchestrator | 19:05:57.130 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.131034 | orchestrator | 19:05:57.131 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.131055 | orchestrator | 19:05:57.131 STDOUT terraform:  } 2025-06-22 19:05:57.131084 | orchestrator | 19:05:57.131 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.131105 | orchestrator | 19:05:57.131 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.131144 | orchestrator | 19:05:57.131 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-22 19:05:57.131180 | orchestrator | 19:05:57.131 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.131200 | orchestrator | 19:05:57.131 STDOUT terraform:  } 2025-06-22 19:05:57.131220 | orchestrator | 19:05:57.131 STDOUT terraform:  } 2025-06-22 19:05:57.131271 | orchestrator | 19:05:57.131 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-22 19:05:57.131323 | orchestrator | 19:05:57.131 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:05:57.131387 | orchestrator | 19:05:57.131 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.131432 | orchestrator | 19:05:57.131 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:05:57.131476 | orchestrator | 19:05:57.131 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:05:57.131531 | orchestrator | 19:05:57.131 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.131588 | orchestrator | 19:05:57.131 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:05:57.131631 | orchestrator | 19:05:57.131 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:05:57.131680 | orchestrator | 19:05:57.131 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:05:57.131730 | orchestrator | 19:05:57.131 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:05:57.131773 | orchestrator | 19:05:57.131 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.131813 | orchestrator | 19:05:57.131 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:05:57.131857 | orchestrator | 19:05:57.131 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.131902 | orchestrator | 19:05:57.131 STDOUT terraform: 
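The six node_port_management entries ([0] through [5]) repeat the manager-port pattern, but with fixed IPs 192.168.16.10 through 192.168.16.15 and an extra allowed pair for 192.168.16.254/20. A count-based sketch is shown below; as before, the subnet and security-group references are assumed names.

```hcl
resource "openstack_networking_port_v2" "node_port_management" {
  count              = 6
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]  # assumed

  # 192.168.16.10 .. 192.168.16.15 for testbed-node-0 .. testbed-node-5
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}
```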
 + port_security_enabled = (known after apply) 2025-06-22 19:05:57.131996 | orchestrator | 19:05:57.131 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:05:57.132043 | orchestrator | 19:05:57.132 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.132085 | orchestrator | 19:05:57.132 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:05:57.132135 | orchestrator | 19:05:57.132 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.132162 | orchestrator | 19:05:57.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.132196 | orchestrator | 19:05:57.132 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:05:57.132216 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132243 | orchestrator | 19:05:57.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.132277 | orchestrator | 19:05:57.132 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:05:57.132296 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132321 | orchestrator | 19:05:57.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.132362 | orchestrator | 19:05:57.132 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:05:57.132382 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132418 | orchestrator | 19:05:57.132 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:05:57.132455 | orchestrator | 19:05:57.132 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:05:57.132476 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132507 | orchestrator | 19:05:57.132 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:05:57.132531 | orchestrator | 19:05:57.132 STDOUT terraform:  + fixed_ip { 2025-06-22 19:05:57.132561 | orchestrator | 19:05:57.132 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-22 19:05:57.132596 | orchestrator | 19:05:57.132 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.132615 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132634 | orchestrator | 19:05:57.132 STDOUT terraform:  } 2025-06-22 19:05:57.132698 | orchestrator | 19:05:57.132 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-22 19:05:57.132751 | orchestrator | 19:05:57.132 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-22 19:05:57.132786 | orchestrator | 19:05:57.132 STDOUT terraform:  + force_destroy = false 2025-06-22 19:05:57.132821 | orchestrator | 19:05:57.132 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.132862 | orchestrator | 19:05:57.132 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:05:57.132897 | orchestrator | 19:05:57.132 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.132948 | orchestrator | 19:05:57.132 STDOUT terraform:  + router_id = (known after apply) 2025-06-22 19:05:57.132993 | orchestrator | 19:05:57.132 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:05:57.133013 | orchestrator | 19:05:57.133 STDOUT terraform:  } 2025-06-22 19:05:57.133054 | orchestrator | 19:05:57.133 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-22 19:05:57.133095 | orchestrator | 19:05:57.133 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-22 19:05:57.133136 | orchestrator | 
19:05:57.133 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:05:57.133187 | orchestrator | 19:05:57.133 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.133216 | orchestrator | 19:05:57.133 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 19:05:57.133238 | orchestrator | 19:05:57.133 STDOUT terraform:  + "nova", 2025-06-22 19:05:57.133263 | orchestrator | 19:05:57.133 STDOUT terraform:  ] 2025-06-22 19:05:57.133305 | orchestrator | 19:05:57.133 STDOUT terraform:  + distributed = (known after apply) 2025-06-22 19:05:57.133346 | orchestrator | 19:05:57.133 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-22 19:05:57.133400 | orchestrator | 19:05:57.133 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-22 19:05:57.133456 | orchestrator | 19:05:57.133 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-06-22 19:05:57.133500 | orchestrator | 19:05:57.133 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.133537 | orchestrator | 19:05:57.133 STDOUT terraform:  + name = "testbed" 2025-06-22 19:05:57.133580 | orchestrator | 19:05:57.133 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.133627 | orchestrator | 19:05:57.133 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.133662 | orchestrator | 19:05:57.133 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-22 19:05:57.133681 | orchestrator | 19:05:57.133 STDOUT terraform:  } 2025-06-22 19:05:57.133738 | orchestrator | 19:05:57.133 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-22 19:05:57.133800 | orchestrator | 19:05:57.133 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-22 19:05:57.133841 | orchestrator | 19:05:57.133 STDOUT terraform:  + description = "ssh" 2025-06-22 19:05:57.133875 | orchestrator | 19:05:57.133 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.133911 | orchestrator | 19:05:57.133 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.133970 | orchestrator | 19:05:57.133 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.134002 | orchestrator | 19:05:57.133 STDOUT terraform:  + port_range_max = 22 2025-06-22 19:05:57.134055 | orchestrator | 19:05:57.134 STDOUT terraform:  + port_range_min = 22 2025-06-22 19:05:57.134093 | orchestrator | 19:05:57.134 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:05:57.134135 | orchestrator | 19:05:57.134 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.134175 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.134220 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.134261 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.134306 | orchestrator | 19:05:57.134 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.134347 | orchestrator | 19:05:57.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.134367 | orchestrator | 19:05:57.134 STDOUT terraform:  } 2025-06-22 19:05:57.134428 | orchestrator | 19:05:57.134 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-22 19:05:57.134485 | orchestrator | 19:05:57.134 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-22 19:05:57.134526 | orchestrator | 19:05:57.134 STDOUT terraform:  + description = "wireguard" 2025-06-22 19:05:57.134566 | orchestrator | 19:05:57.134 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.134601 | orchestrator | 19:05:57.134 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.134645 | orchestrator | 19:05:57.134 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.134684 | orchestrator | 19:05:57.134 STDOUT terraform:  + port_range_max = 51820 2025-06-22 19:05:57.134714 | orchestrator | 19:05:57.134 STDOUT terraform:  + port_range_min = 51820 2025-06-22 19:05:57.134744 | orchestrator | 19:05:57.134 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:05:57.134786 | orchestrator | 19:05:57.134 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.134842 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.134892 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.134968 | orchestrator | 19:05:57.134 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.135016 | orchestrator | 19:05:57.134 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.135069 | orchestrator | 19:05:57.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.135090 | orchestrator | 19:05:57.135 STDOUT terraform:  } 2025-06-22 19:05:57.135153 | orchestrator | 19:05:57.135 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-22 19:05:57.135213 | orchestrator | 19:05:57.135 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-22 19:05:57.135252 | orchestrator | 19:05:57.135 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.135284 | orchestrator | 19:05:57.135 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.135341 | orchestrator | 19:05:57.135 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.135373 | orchestrator | 19:05:57.135 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:05:57.135416 | orchestrator | 19:05:57.135 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.135464 | orchestrator | 19:05:57.135 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.135506 | orchestrator | 19:05:57.135 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.135551 | orchestrator | 19:05:57.135 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:05:57.135594 | orchestrator | 19:05:57.135 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.135657 | orchestrator | 19:05:57.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.135679 | orchestrator | 19:05:57.135 STDOUT terraform:  } 2025-06-22 19:05:57.135737 | orchestrator | 19:05:57.135 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-22 19:05:57.135800 | orchestrator | 19:05:57.135 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-22 19:05:57.135838 | orchestrator | 19:05:57.135 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.135870 | orchestrator | 19:05:57.135 STDOUT terraform:  
+ ethertype = "IPv4" 2025-06-22 19:05:57.135921 | orchestrator | 19:05:57.135 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.135991 | orchestrator | 19:05:57.135 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:05:57.136034 | orchestrator | 19:05:57.136 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.136075 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.136121 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.136163 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:05:57.136203 | orchestrator | 19:05:57.136 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.136244 | orchestrator | 19:05:57.136 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.136263 | orchestrator | 19:05:57.136 STDOUT terraform:  } 2025-06-22 19:05:57.136328 | orchestrator | 19:05:57.136 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-22 19:05:57.136388 | orchestrator | 19:05:57.136 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-22 19:05:57.136432 | orchestrator | 19:05:57.136 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.136463 | orchestrator | 19:05:57.136 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.136507 | orchestrator | 19:05:57.136 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.136538 | orchestrator | 19:05:57.136 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:05:57.136588 | orchestrator | 19:05:57.136 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.136634 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.136681 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.136717 | orchestrator | 19:05:57.136 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.136758 | orchestrator | 19:05:57.136 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.136804 | orchestrator | 19:05:57.136 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.136829 | orchestrator | 19:05:57.136 STDOUT terraform:  } 2025-06-22 19:05:57.136886 | orchestrator | 19:05:57.136 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-22 19:05:57.136960 | orchestrator | 19:05:57.136 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-22 19:05:57.136997 | orchestrator | 19:05:57.136 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.137027 | orchestrator | 19:05:57.137 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.137079 | orchestrator | 19:05:57.137 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.137109 | orchestrator | 19:05:57.137 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:05:57.137151 | orchestrator | 19:05:57.137 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.137190 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.137231 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 
19:05:57.137269 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.137319 | orchestrator | 19:05:57.137 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.137364 | orchestrator | 19:05:57.137 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.137383 | orchestrator | 19:05:57.137 STDOUT terraform:  } 2025-06-22 19:05:57.137437 | orchestrator | 19:05:57.137 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-22 19:05:57.137496 | orchestrator | 19:05:57.137 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-22 19:05:57.137538 | orchestrator | 19:05:57.137 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.137569 | orchestrator | 19:05:57.137 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.137610 | orchestrator | 19:05:57.137 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.137640 | orchestrator | 19:05:57.137 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:05:57.137688 | orchestrator | 19:05:57.137 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.137734 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.137775 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.137814 | orchestrator | 19:05:57.137 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.137855 | orchestrator | 19:05:57.137 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.137901 | orchestrator | 19:05:57.137 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.137952 | orchestrator | 19:05:57.137 STDOUT terraform:  } 2025-06-22 19:05:57.138011 | orchestrator | 19:05:57.137 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-22 19:05:57.138091 | orchestrator | 19:05:57.138 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-22 19:05:57.138126 | orchestrator | 19:05:57.138 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.138157 | orchestrator | 19:05:57.138 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.138199 | orchestrator | 19:05:57.138 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.138239 | orchestrator | 19:05:57.138 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:05:57.138281 | orchestrator | 19:05:57.138 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.138325 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.138373 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.138409 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.138450 | orchestrator | 19:05:57.138 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.138495 | orchestrator | 19:05:57.138 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.138515 | orchestrator | 19:05:57.138 STDOUT terraform:  } 2025-06-22 19:05:57.138572 | orchestrator | 19:05:57.138 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-22 19:05:57.138626 | orchestrator | 
19:05:57.138 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-22 19:05:57.138664 | orchestrator | 19:05:57.138 STDOUT terraform:  + description = "vrrp" 2025-06-22 19:05:57.138704 | orchestrator | 19:05:57.138 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:05:57.138742 | orchestrator | 19:05:57.138 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:05:57.138806 | orchestrator | 19:05:57.138 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.138840 | orchestrator | 19:05:57.138 STDOUT terraform:  + protocol = "112" 2025-06-22 19:05:57.138882 | orchestrator | 19:05:57.138 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.138945 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:05:57.138989 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:05:57.139025 | orchestrator | 19:05:57.138 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:05:57.139072 | orchestrator | 19:05:57.139 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:05:57.139114 | orchestrator | 19:05:57.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.139141 | orchestrator | 19:05:57.139 STDOUT terraform:  } 2025-06-22 19:05:57.139195 | orchestrator | 19:05:57.139 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-22 19:05:57.139251 | orchestrator | 19:05:57.139 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-22 19:05:57.139290 | orchestrator | 19:05:57.139 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.139329 | orchestrator | 19:05:57.139 STDOUT terraform:  + description = "management security group" 2025-06-22 19:05:57.139367 | orchestrator | 19:05:57.139 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.139406 | orchestrator | 19:05:57.139 STDOUT terraform:  + name = "testbed-management" 2025-06-22 19:05:57.139441 | orchestrator | 19:05:57.139 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.139484 | orchestrator | 19:05:57.139 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:05:57.139524 | orchestrator | 19:05:57.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.139552 | orchestrator | 19:05:57.139 STDOUT terraform:  } 2025-06-22 19:05:57.139603 | orchestrator | 19:05:57.139 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-22 19:05:57.139658 | orchestrator | 19:05:57.139 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-22 19:05:57.139695 | orchestrator | 19:05:57.139 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.139736 | orchestrator | 19:05:57.139 STDOUT terraform:  + description = "node security group" 2025-06-22 19:05:57.139774 | orchestrator | 19:05:57.139 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.139804 | orchestrator | 19:05:57.139 STDOUT terraform:  + name = "testbed-node" 2025-06-22 19:05:57.139839 | orchestrator | 19:05:57.139 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.139873 | orchestrator | 19:05:57.139 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:05:57.139912 | orchestrator | 19:05:57.139 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-06-22 19:05:57.139952 | orchestrator | 19:05:57.139 STDOUT terraform:  } 2025-06-22 19:05:57.140018 | orchestrator | 19:05:57.139 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-22 19:05:57.140072 | orchestrator | 19:05:57.140 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-22 19:05:57.140127 | orchestrator | 19:05:57.140 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:05:57.140163 | orchestrator | 19:05:57.140 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-22 19:05:57.140188 | orchestrator | 19:05:57.140 STDOUT terraform:  + dns_nameservers = [ 2025-06-22 19:05:57.140211 | orchestrator | 19:05:57.140 STDOUT terraform:  + "8.8.8.8", 2025-06-22 19:05:57.140237 | orchestrator | 19:05:57.140 STDOUT terraform:  + "9.9.9.9", 2025-06-22 19:05:57.140268 | orchestrator | 19:05:57.140 STDOUT terraform:  ] 2025-06-22 19:05:57.140295 | orchestrator | 19:05:57.140 STDOUT terraform:  + enable_dhcp = true 2025-06-22 19:05:57.140331 | orchestrator | 19:05:57.140 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-22 19:05:57.140375 | orchestrator | 19:05:57.140 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.140401 | orchestrator | 19:05:57.140 STDOUT terraform:  + ip_version = 4 2025-06-22 19:05:57.140436 | orchestrator | 19:05:57.140 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-22 19:05:57.140471 | orchestrator | 19:05:57.140 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-22 19:05:57.140514 | orchestrator | 19:05:57.140 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-22 19:05:57.140549 | orchestrator | 19:05:57.140 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:05:57.140575 | orchestrator | 19:05:57.140 STDOUT terraform:  + no_gateway = false 2025-06-22 19:05:57.140614 | orchestrator | 19:05:57.140 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:05:57.140650 | orchestrator | 19:05:57.140 STDOUT terraform:  + service_types = (known after apply) 2025-06-22 19:05:57.140686 | orchestrator | 19:05:57.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:05:57.140713 | orchestrator | 19:05:57.140 STDOUT terraform:  + allocation_pool { 2025-06-22 19:05:57.140750 | orchestrator | 19:05:57.140 STDOUT terraform:  + end = "192.168.31.250" 2025-06-22 19:05:57.140782 | orchestrator | 19:05:57.140 STDOUT terraform:  + start = "192.168.31.200 2025-06-22 19:05:57.140858 | orchestrator | 19:05:57.140 STDOUT terraform: " 2025-06-22 19:05:57.140878 | orchestrator | 19:05:57.140 STDOUT terraform:  } 2025-06-22 19:05:57.140898 | orchestrator | 19:05:57.140 STDOUT terraform:  } 2025-06-22 19:05:57.140967 | orchestrator | 19:05:57.140 STDOUT terraform:  # terraform_data.image will be created 2025-06-22 19:05:57.141000 | orchestrator | 19:05:57.140 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-22 19:05:57.141034 | orchestrator | 19:05:57.141 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.141061 | orchestrator | 19:05:57.141 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:05:57.141095 | orchestrator | 19:05:57.141 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:05:57.141126 | orchestrator | 19:05:57.141 STDOUT terraform:  } 2025-06-22 19:05:57.141166 | orchestrator | 19:05:57.141 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-22 19:05:57.141201 | orchestrator | 19:05:57.141 
STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-22 19:05:57.141231 | orchestrator | 19:05:57.141 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:05:57.141257 | orchestrator | 19:05:57.141 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:05:57.141287 | orchestrator | 19:05:57.141 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:05:57.141307 | orchestrator | 19:05:57.141 STDOUT terraform:  } 2025-06-22 19:05:57.141345 | orchestrator | 19:05:57.141 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-22 19:05:57.141372 | orchestrator | 19:05:57.141 STDOUT terraform: Changes to Outputs: 2025-06-22 19:05:57.141402 | orchestrator | 19:05:57.141 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-22 19:05:57.141433 | orchestrator | 19:05:57.141 STDOUT terraform:  + private_key = (sensitive value) 2025-06-22 19:05:57.348678 | orchestrator | 19:05:57.348 STDOUT terraform: terraform_data.image: Creating... 2025-06-22 19:05:57.348763 | orchestrator | 19:05:57.348 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-22 19:05:57.348771 | orchestrator | 19:05:57.348 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=03c43d77-cfd8-9458-a902-7bcb384fcc48] 2025-06-22 19:05:57.349056 | orchestrator | 19:05:57.348 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=fd9b4ab4-819a-a756-39c5-0d265f2e358a] 2025-06-22 19:05:57.367920 | orchestrator | 19:05:57.367 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-22 19:05:57.380285 | orchestrator | 19:05:57.380 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-22 19:05:57.383900 | orchestrator | 19:05:57.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-22 19:05:57.383986 | orchestrator | 19:05:57.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-22 19:05:57.384269 | orchestrator | 19:05:57.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-22 19:05:57.384807 | orchestrator | 19:05:57.384 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-22 19:05:57.384914 | orchestrator | 19:05:57.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-22 19:05:57.387163 | orchestrator | 19:05:57.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-22 19:05:57.387630 | orchestrator | 19:05:57.387 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-22 19:05:57.392048 | orchestrator | 19:05:57.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-22 19:05:57.839657 | orchestrator | 19:05:57.839 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:05:57.842313 | orchestrator | 19:05:57.841 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:05:57.848072 | orchestrator | 19:05:57.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-22 19:05:57.848379 | orchestrator | 19:05:57.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
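For readability, here is a rough HCL reconstruction of the node management ports whose plan blocks are printed above (allowed address pairs, fixed IPs 192.168.16.14/.15). It is only a sketch inferred from the plan attributes; the count, the literal values and the security-group reference are assumptions, not the testbed's actual source.

```hcl
# Sketch only: reconstructed from the plan output above, not the testbed's source.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                                  # indices [0]..[5] appear in the plan/apply log
  network_id = openstack_networking_network_v2.net_management.id  # network created earlier in this apply

  # Assumption: the ports are attached to the node security group created later in the log.
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${count.index + 10}"  # matches .14 for [4] and .15 for [5] in the plan
  }

  # Additional prefixes/VIPs the ports may legitimately carry (values taken from the plan).
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}
```

The allowed_address_pairs entries are what later allow VIP/VRRP addresses to move between nodes without Neutron's port security dropping the traffic.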
2025-06-22 19:05:57.900755 | orchestrator | 19:05:57.900 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-22 19:05:57.908271 | orchestrator | 19:05:57.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-22 19:06:03.450753 | orchestrator | 19:06:03.450 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=0ed18a12-87c9-48cc-999e-a98f6fccddb5] 2025-06-22 19:06:03.461648 | orchestrator | 19:06:03.461 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-22 19:06:07.386307 | orchestrator | 19:06:07.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-22 19:06:07.796266 | orchestrator | 19:06:07.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:06:07.796302 | orchestrator | 19:06:07.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-22 19:06:07.796308 | orchestrator | 19:06:07.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-22 19:06:07.796312 | orchestrator | 19:06:07.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-22 19:06:07.796316 | orchestrator | 19:06:07.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:06:07.849199 | orchestrator | 19:06:07.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-22 19:06:07.849270 | orchestrator | 19:06:07.849 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:06:07.909608 | orchestrator | 19:06:07.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:06:07.975258 | orchestrator | 19:06:07.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b] 2025-06-22 19:06:07.981423 | orchestrator | 19:06:07.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-22 19:06:07.991675 | orchestrator | 19:06:07.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d] 2025-06-22 19:06:07.997774 | orchestrator | 19:06:07.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-22 19:06:08.013797 | orchestrator | 19:06:08.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=a6d93ccb-5091-4fbc-bc32-8344f81d146e] 2025-06-22 19:06:08.019820 | orchestrator | 19:06:08.019 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-22 19:06:08.022158 | orchestrator | 19:06:08.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=a34530e6-164e-4284-ba94-1682f51170e6] 2025-06-22 19:06:08.030086 | orchestrator | 19:06:08.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
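The management network that just finished creating is followed by its subnet (plan block shown earlier: CIDR 192.168.16.0/20, DNS 8.8.8.8/9.9.9.9, allocation pool 192.168.31.200-250). A minimal sketch of that subnet, reconstructed from the planned attributes:

```hcl
# Sketch reconstructed from the plan output; the network resource itself is defined elsewhere.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from this small pool, leaving the rest of the
  # /20 free for the statically addressed node ports (192.168.16.x).
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```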
2025-06-22 19:06:08.038221 | orchestrator | 19:06:08.037 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=4258f07d-32b4-4c40-a297-43ff401da985] 2025-06-22 19:06:08.044162 | orchestrator | 19:06:08.043 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-22 19:06:08.056267 | orchestrator | 19:06:08.056 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f] 2025-06-22 19:06:08.066781 | orchestrator | 19:06:08.066 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-22 19:06:08.120767 | orchestrator | 19:06:08.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=626bb53b-03fa-4cf2-9c74-01c88e74436c] 2025-06-22 19:06:08.137018 | orchestrator | 19:06:08.136 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-22 19:06:08.138089 | orchestrator | 19:06:08.137 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=f3f79088-b1f4-4694-a8b0-38e1aef3e3c0] 2025-06-22 19:06:08.142594 | orchestrator | 19:06:08.142 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=019bbb3055e3409b5335147a71e490a565a3024d] 2025-06-22 19:06:08.149015 | orchestrator | 19:06:08.148 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=d64e86dd-c29d-4edc-bf55-6282aedab238] 2025-06-22 19:06:08.154601 | orchestrator | 19:06:08.154 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-22 19:06:08.156852 | orchestrator | 19:06:08.156 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-22 19:06:08.161034 | orchestrator | 19:06:08.160 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=d03703af7cb4c21cc6f7f17c14d58a3ff9910fd9] 2025-06-22 19:06:13.464873 | orchestrator | 19:06:13.464 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:06:13.796856 | orchestrator | 19:06:13.796 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=adf36478-93bb-42b4-9563-c835c13843ea] 2025-06-22 19:06:14.059876 | orchestrator | 19:06:14.059 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=5f5ea741-c53c-4737-89ab-f4bd5abaac60] 2025-06-22 19:06:14.065622 | orchestrator | 19:06:14.065 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-22 19:06:17.982913 | orchestrator | 19:06:17.982 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:06:17.998223 | orchestrator | 19:06:17.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:06:18.020532 | orchestrator | 19:06:18.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:06:18.031088 | orchestrator | 19:06:18.030 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-22 19:06:18.045205 | orchestrator | 19:06:18.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... 
[10s elapsed] 2025-06-22 19:06:18.067534 | orchestrator | 19:06:18.067 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-22 19:06:18.357461 | orchestrator | 19:06:18.357 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=afcfd86f-9b82-44c6-98eb-03971d4f7354] 2025-06-22 19:06:18.409894 | orchestrator | 19:06:18.409 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=ebef9530-476f-4d45-9413-6d9a7f459b52] 2025-06-22 19:06:18.445484 | orchestrator | 19:06:18.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=e33bbb77-8230-4722-836e-e6cdd6981157] 2025-06-22 19:06:18.461918 | orchestrator | 19:06:18.461 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=dccc5f96-71f7-47e2-8549-6be2ae231111] 2025-06-22 19:06:18.484898 | orchestrator | 19:06:18.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=45370c48-ef13-40f4-9d83-898af248b31f] 2025-06-22 19:06:18.516784 | orchestrator | 19:06:18.516 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=0f30cd8c-78a6-441b-83bd-3e59c68043fe] 2025-06-22 19:06:21.667062 | orchestrator | 19:06:21.666 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=cdeee2ba-c146-44d5-b5ad-daf1f87b49c1] 2025-06-22 19:06:21.675811 | orchestrator | 19:06:21.674 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-22 19:06:21.675901 | orchestrator | 19:06:21.675 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-22 19:06:21.675933 | orchestrator | 19:06:21.675 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-22 19:06:21.915044 | orchestrator | 19:06:21.912 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=bb3272a2-0054-4c6c-adcc-77850408ed58] 2025-06-22 19:06:21.922683 | orchestrator | 19:06:21.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-22 19:06:21.922763 | orchestrator | 19:06:21.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-22 19:06:21.924164 | orchestrator | 19:06:21.923 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-22 19:06:21.924212 | orchestrator | 19:06:21.923 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-22 19:06:21.927338 | orchestrator | 19:06:21.927 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-22 19:06:21.934305 | orchestrator | 19:06:21.934 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-22 19:06:21.997904 | orchestrator | 19:06:21.997 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ca0da4e0-64c3-42f0-a6eb-7734739d192e] 2025-06-22 19:06:22.010149 | orchestrator | 19:06:22.009 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
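The security group and rule resources being created here map one-to-one onto the plan blocks above. A sketch of the management group with its ssh rule (rule1), assuming the usual pattern of one rule resource per entry:

```hcl
# Sketch of the management security group and its first rule, based on the plan above.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```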
2025-06-22 19:06:22.012070 | orchestrator | 19:06:22.011 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-22 19:06:22.012632 | orchestrator | 19:06:22.012 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-22 19:06:22.127212 | orchestrator | 19:06:22.126 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=2681cf4b-cb9a-4646-80e0-ca0d9ef54729] 2025-06-22 19:06:22.140798 | orchestrator | 19:06:22.140 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-22 19:06:22.210189 | orchestrator | 19:06:22.209 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=9b692a7e-d071-4a61-a9a9-14215341b63b] 2025-06-22 19:06:22.229918 | orchestrator | 19:06:22.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-22 19:06:22.375576 | orchestrator | 19:06:22.375 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=00f8731e-53e5-447d-b12b-540cbadf2646] 2025-06-22 19:06:22.388676 | orchestrator | 19:06:22.388 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-22 19:06:22.457154 | orchestrator | 19:06:22.456 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ff84277d-960d-41ad-8ce0-e52f43fb9ecd] 2025-06-22 19:06:22.471739 | orchestrator | 19:06:22.471 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-22 19:06:22.652323 | orchestrator | 19:06:22.651 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=59f3d6bd-4433-4a37-88cd-1f0bae2675a0] 2025-06-22 19:06:22.665534 | orchestrator | 19:06:22.665 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-22 19:06:22.674861 | orchestrator | 19:06:22.674 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b1ce18ba-fcf2-452e-89a8-4478355da3f9] 2025-06-22 19:06:22.688642 | orchestrator | 19:06:22.688 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-22 19:06:23.288352 | orchestrator | 19:06:23.287 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=cc4c2d93-dddf-4370-90cb-dd5da29aadff] 2025-06-22 19:06:23.298495 | orchestrator | 19:06:23.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
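One detail from the rules completing here: the VRRP rule uses the raw IP protocol number rather than a keyword, which is how Neutron expects protocols without a named alias. A sketch based on the plan block; which security group it attaches to is not visible in the plan, so that reference is an assumption:

```hcl
# Sketch of the VRRP rule from the plan; the target security group is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"  # VRRP has no protocol keyword in Neutron, so the number is used
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```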
2025-06-22 19:06:23.462975 | orchestrator | 19:06:23.462 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=9dc6307a-1990-4e81-a62b-d7d17313508f] 2025-06-22 19:06:23.482384 | orchestrator | 19:06:23.482 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=1986681f-3e07-443d-a09c-f9e7a52c335e] 2025-06-22 19:06:27.848705 | orchestrator | 19:06:27.848 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=aaa2d0ba-02b7-4cb8-ba65-cb5b041c13de] 2025-06-22 19:06:27.909678 | orchestrator | 19:06:27.909 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=a15c8862-17e6-4694-a47c-09e84b4f2c4d] 2025-06-22 19:06:28.074413 | orchestrator | 19:06:28.074 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=e0a5ee73-9400-4c99-9daa-4ad1f5633898] 2025-06-22 19:06:28.135696 | orchestrator | 19:06:28.135 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=9876bcf0-9b40-4a98-bfc2-bc51aba945a5] 2025-06-22 19:06:28.450749 | orchestrator | 19:06:28.450 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=32d20620-4cfd-4125-9ec9-35edeae39de9] 2025-06-22 19:06:28.748768 | orchestrator | 19:06:28.748 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=cc5def26-5d2a-462c-becb-e1a39a79bb17] 2025-06-22 19:06:29.000686 | orchestrator | 19:06:29.000 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=1ce45f4d-32c8-41f9-8cd7-033bad9a9a63] 2025-06-22 19:06:29.730114 | orchestrator | 19:06:29.728 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=e65f0b2e-3bed-4ba2-a981-d08a92d0546d] 2025-06-22 19:06:29.770546 | orchestrator | 19:06:29.770 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-22 19:06:29.771194 | orchestrator | 19:06:29.771 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-22 19:06:29.774639 | orchestrator | 19:06:29.774 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-22 19:06:29.775078 | orchestrator | 19:06:29.775 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-22 19:06:29.777453 | orchestrator | 19:06:29.777 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-22 19:06:29.783239 | orchestrator | 19:06:29.783 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-22 19:06:29.789327 | orchestrator | 19:06:29.789 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-22 19:06:36.824161 | orchestrator | 19:06:36.823 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=2cd2b24f-8d39-473f-97ff-5ad1ea871e3e] 2025-06-22 19:06:36.841404 | orchestrator | 19:06:36.841 STDOUT terraform: local_file.inventory: Creating... 2025-06-22 19:06:36.841675 | orchestrator | 19:06:36.841 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-22 19:06:36.842778 | orchestrator | 19:06:36.842 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 
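The router interface that just completed is what connects the management subnet to the external network. A sketch of the two resources as implied by the earlier plan blocks; the external network ID is the literal value from the plan and is presumably supplied via a variable in the real configuration:

```hcl
# Sketch based on the router plan block shown earlier.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # likely variable-driven in the real config
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```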
2025-06-22 19:06:36.849101 | orchestrator | 19:06:36.848 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=91dd7bc5e7312db539c8e98bcafda53078e84914] 2025-06-22 19:06:36.849770 | orchestrator | 19:06:36.849 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=298f2a0a1992d477f429657f0366aa77f45ed757] 2025-06-22 19:06:37.678702 | orchestrator | 19:06:37.678 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=2cd2b24f-8d39-473f-97ff-5ad1ea871e3e] 2025-06-22 19:06:39.775109 | orchestrator | 19:06:39.774 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-22 19:06:39.776063 | orchestrator | 19:06:39.775 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-22 19:06:39.776296 | orchestrator | 19:06:39.776 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-22 19:06:39.780477 | orchestrator | 19:06:39.780 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-22 19:06:39.781945 | orchestrator | 19:06:39.781 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-22 19:06:39.791298 | orchestrator | 19:06:39.791 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-22 19:06:49.775699 | orchestrator | 19:06:49.775 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-22 19:06:49.776671 | orchestrator | 19:06:49.776 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-22 19:06:49.776722 | orchestrator | 19:06:49.776 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-22 19:06:49.780970 | orchestrator | 19:06:49.780 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-22 19:06:49.782299 | orchestrator | 19:06:49.782 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-22 19:06:49.791694 | orchestrator | 19:06:49.791 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-22 19:06:59.778522 | orchestrator | 19:06:59.778 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-06-22 19:06:59.778681 | orchestrator | 19:06:59.778 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-06-22 19:06:59.778733 | orchestrator | 19:06:59.778 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-06-22 19:06:59.781878 | orchestrator | 19:06:59.781 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-06-22 19:06:59.783064 | orchestrator | 19:06:59.782 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-06-22 19:06:59.792353 | orchestrator | 19:06:59.792 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-06-22 19:07:00.373818 | orchestrator | 19:07:00.373 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=e9af2a49-da97-4560-b476-17e2a006ec45] 2025-06-22 19:07:00.383716 | orchestrator | 19:07:00.383 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=8fc239ae-0c12-438f-94d7-fe2b569f4426] 2025-06-22 19:07:00.405054 | orchestrator | 19:07:00.404 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=55452ac9-c223-4e98-842b-8230f5cbf9e0] 2025-06-22 19:07:00.574338 | orchestrator | 19:07:00.573 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=454bf228-165f-4921-9f44-9d9458b9e881] 2025-06-22 19:07:09.782835 | orchestrator | 19:07:09.782 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-06-22 19:07:09.793184 | orchestrator | 19:07:09.792 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-06-22 19:07:10.677350 | orchestrator | 19:07:10.676 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=cbe3b3a3-b214-47df-820d-1167e72edba6] 2025-06-22 19:07:10.891235 | orchestrator | 19:07:10.890 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=5e9408fd-8fe3-4bbc-9283-5c7d4507bddc] 2025-06-22 19:07:10.900384 | orchestrator | 19:07:10.900 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-22 19:07:10.910176 | orchestrator | 19:07:10.909 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2015903271217335964] 2025-06-22 19:07:10.913785 | orchestrator | 19:07:10.913 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-22 19:07:10.926187 | orchestrator | 19:07:10.920 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-22 19:07:10.926269 | orchestrator | 19:07:10.921 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-22 19:07:10.926280 | orchestrator | 19:07:10.921 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-22 19:07:10.945767 | orchestrator | 19:07:10.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-22 19:07:10.950082 | orchestrator | 19:07:10.949 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-22 19:07:10.951353 | orchestrator | 19:07:10.951 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-22 19:07:10.953821 | orchestrator | 19:07:10.953 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-22 19:07:10.954395 | orchestrator | 19:07:10.954 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-06-22 19:07:10.959602 | orchestrator | 19:07:10.959 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 
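The attachment resources starting here pair the pre-created Cinder volumes with the node instances. A sketch that reproduces the instance/volume pairing visible in the completion messages that follow (volumes 0..8 spread across three nodes); the exact index expression is an assumption:

```hcl
# Sketch of the volume attachments; the index arithmetic merely reproduces the
# pairing observed in this apply log and may differ from the real configuration.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```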
2025-06-22 19:07:16.265089 | orchestrator | 19:07:16.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=e9af2a49-da97-4560-b476-17e2a006ec45/a34530e6-164e-4284-ba94-1682f51170e6] 2025-06-22 19:07:16.267558 | orchestrator | 19:07:16.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=454bf228-165f-4921-9f44-9d9458b9e881/626bb53b-03fa-4cf2-9c74-01c88e74436c] 2025-06-22 19:07:16.298579 | orchestrator | 19:07:16.297 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=55452ac9-c223-4e98-842b-8230f5cbf9e0/26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d] 2025-06-22 19:07:16.307264 | orchestrator | 19:07:16.306 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=e9af2a49-da97-4560-b476-17e2a006ec45/d64e86dd-c29d-4edc-bf55-6282aedab238] 2025-06-22 19:07:16.309389 | orchestrator | 19:07:16.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=454bf228-165f-4921-9f44-9d9458b9e881/4258f07d-32b4-4c40-a297-43ff401da985] 2025-06-22 19:07:16.333465 | orchestrator | 19:07:16.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=e9af2a49-da97-4560-b476-17e2a006ec45/9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b] 2025-06-22 19:07:16.336396 | orchestrator | 19:07:16.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=55452ac9-c223-4e98-842b-8230f5cbf9e0/f3f79088-b1f4-4694-a8b0-38e1aef3e3c0] 2025-06-22 19:07:16.432558 | orchestrator | 19:07:16.432 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=454bf228-165f-4921-9f44-9d9458b9e881/a6d93ccb-5091-4fbc-bc32-8344f81d146e] 2025-06-22 19:07:16.570473 | orchestrator | 19:07:16.569 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=55452ac9-c223-4e98-842b-8230f5cbf9e0/e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f] 2025-06-22 19:07:20.959496 | orchestrator | 19:07:20.959 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-22 19:07:30.960651 | orchestrator | 19:07:30.960 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-22 19:07:31.445286 | orchestrator | 19:07:31.444 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=84bd8ed8-492d-4301-90ad-35564937464b] 2025-06-22 19:07:33.177434 | orchestrator | 19:07:33.176 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
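The apply finishes with two sensitive outputs, whose values stay suppressed in the console below. Their declarations presumably look like the following sketch; the referenced resources and attributes are assumptions consistent with the resource names that appear in the apply log (the floating IP and the generated keypair):

```hcl
# Sketch of the sensitive outputs; the value expressions are assumptions.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```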
2025-06-22 19:07:33.177570 | orchestrator | 19:07:33.177 STDOUT terraform: Outputs: 2025-06-22 19:07:33.177589 | orchestrator | 19:07:33.177 STDOUT terraform: manager_address = 2025-06-22 19:07:33.177601 | orchestrator | 19:07:33.177 STDOUT terraform: private_key = 2025-06-22 19:07:33.360014 | orchestrator | ok: Runtime: 0:01:46.841496 2025-06-22 19:07:33.396372 | 2025-06-22 19:07:33.396505 | TASK [Fetch manager address] 2025-06-22 19:07:33.831462 | orchestrator | ok 2025-06-22 19:07:33.844035 | 2025-06-22 19:07:33.844192 | TASK [Set manager_host address] 2025-06-22 19:07:33.924129 | orchestrator | ok 2025-06-22 19:07:33.933454 | 2025-06-22 19:07:33.933597 | LOOP [Update ansible collections] 2025-06-22 19:07:35.930318 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:07:35.930752 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:07:35.930824 | orchestrator | Starting galaxy collection install process 2025-06-22 19:07:35.931012 | orchestrator | Process install dependency map 2025-06-22 19:07:35.931066 | orchestrator | Starting collection install process 2025-06-22 19:07:35.931098 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-06-22 19:07:35.931134 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-06-22 19:07:35.931169 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-22 19:07:35.931263 | orchestrator | ok: Item: commons Runtime: 0:00:01.672509 2025-06-22 19:07:37.147306 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:07:37.147478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:07:37.147532 | orchestrator | Starting galaxy collection install process 2025-06-22 19:07:37.147574 | orchestrator | Process install dependency map 2025-06-22 19:07:37.147613 | orchestrator | Starting collection install process 2025-06-22 19:07:37.147648 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-06-22 19:07:37.147682 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-06-22 19:07:37.147715 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-22 19:07:37.147768 | orchestrator | ok: Item: services Runtime: 0:00:00.944543 2025-06-22 19:07:37.164736 | 2025-06-22 19:07:37.164890 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:07:47.675533 | orchestrator | ok 2025-06-22 19:07:47.683816 | 2025-06-22 19:07:47.683932 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:08:47.727789 | orchestrator | ok 2025-06-22 19:08:47.735013 | 2025-06-22 19:08:47.735159 | TASK [Fetch manager ssh hostkey] 2025-06-22 19:08:49.309163 | orchestrator | Output suppressed because no_log was given 2025-06-22 19:08:49.316960 | 2025-06-22 19:08:49.317087 | TASK [Get ssh keypair from terraform environment] 2025-06-22 19:08:49.850302 | orchestrator | ok: Runtime: 0:00:00.005305 2025-06-22 19:08:49.867587 | 2025-06-22 19:08:49.867728 | TASK [Point out that the following task takes some time and does not give any output] 
2025-06-22 19:08:49.903610 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 19:08:49.915173 | 2025-06-22 19:08:49.915379 | TASK [Run manager part 0] 2025-06-22 19:08:51.086832 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:08:51.133511 | orchestrator | 2025-06-22 19:08:51.133563 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-22 19:08:51.133570 | orchestrator | 2025-06-22 19:08:51.133584 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-22 19:08:52.740264 | orchestrator | ok: [testbed-manager] 2025-06-22 19:08:52.740346 | orchestrator | 2025-06-22 19:08:52.740394 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:08:52.740417 | orchestrator | 2025-06-22 19:08:52.740438 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:08:55.028420 | orchestrator | ok: [testbed-manager] 2025-06-22 19:08:55.028523 | orchestrator | 2025-06-22 19:08:55.028542 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:08:55.680344 | orchestrator | ok: [testbed-manager] 2025-06-22 19:08:55.680398 | orchestrator | 2025-06-22 19:08:55.680411 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:08:55.721295 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.721334 | orchestrator | 2025-06-22 19:08:55.721343 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-22 19:08:55.744978 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.745016 | orchestrator | 2025-06-22 19:08:55.745024 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:08:55.769598 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.769635 | orchestrator | 2025-06-22 19:08:55.769641 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:08:55.802759 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.802812 | orchestrator | 2025-06-22 19:08:55.802824 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:08:55.841379 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.841420 | orchestrator | 2025-06-22 19:08:55.841428 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-22 19:08:55.877865 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.877906 | orchestrator | 2025-06-22 19:08:55.877915 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-22 19:08:55.907300 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:08:55.907337 | orchestrator | 2025-06-22 19:08:55.907345 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-22 19:08:56.687521 | orchestrator | changed: [testbed-manager] 2025-06-22 19:08:56.687588 | orchestrator | 2025-06-22 19:08:56.687604 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-06-22 19:12:05.813560 | orchestrator | changed: [testbed-manager] 2025-06-22 19:12:05.813633 | orchestrator | 2025-06-22 19:12:05.813653 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 19:16:38.937278 | orchestrator | changed: [testbed-manager] 2025-06-22 19:16:38.937400 | orchestrator | 2025-06-22 19:16:38.937421 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:16:59.201946 | orchestrator | changed: [testbed-manager] 2025-06-22 19:16:59.202147 | orchestrator | 2025-06-22 19:16:59.202170 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:17:07.868032 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:07.868124 | orchestrator | 2025-06-22 19:17:07.868141 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:17:07.911726 | orchestrator | ok: [testbed-manager] 2025-06-22 19:17:07.911816 | orchestrator | 2025-06-22 19:17:07.911834 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-22 19:17:08.731189 | orchestrator | ok: [testbed-manager] 2025-06-22 19:17:08.731286 | orchestrator | 2025-06-22 19:17:08.731306 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-22 19:17:09.486842 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:09.486944 | orchestrator | 2025-06-22 19:17:09.486970 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-22 19:17:15.930719 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:15.930761 | orchestrator | 2025-06-22 19:17:15.930783 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-22 19:17:22.095171 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:22.095246 | orchestrator | 2025-06-22 19:17:22.095259 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-22 19:17:24.742991 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:24.743084 | orchestrator | 2025-06-22 19:17:24.743101 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-22 19:17:26.516345 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:26.516453 | orchestrator | 2025-06-22 19:17:26.516470 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-22 19:17:27.641699 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:17:27.641800 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:17:27.641816 | orchestrator | 2025-06-22 19:17:27.641829 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-22 19:17:27.686522 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:17:27.686652 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:17:27.686667 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 19:17:27.686679 | orchestrator | deprecation_warnings=False in ansible.cfg. 
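The venv-related tasks above come down to creating /opt/venv and installing a few Python packages into it. A rough shell equivalent is sketched below; the job performs these steps via Ansible, and only the requests and docker version pins are spelled out in the task names, so the other package arguments are taken as-is from the task titles.

```bash
# Approximate shell equivalent of "Create venv directory" and the
# "Install ... in venv" tasks above; the job performs these via Ansible.
sudo python3 -m venv /opt/venv
sudo /opt/venv/bin/pip install --upgrade pip
sudo /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
```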
2025-06-22 19:17:41.969960 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:17:41.970210 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:17:41.970232 | orchestrator | 2025-06-22 19:17:41.970246 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-22 19:17:42.577286 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:42.577404 | orchestrator | 2025-06-22 19:17:42.577431 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-22 19:19:05.025504 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-22 19:19:05.025615 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-22 19:19:05.025637 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-22 19:19:05.025762 | orchestrator | 2025-06-22 19:19:05.025779 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-22 19:19:07.340329 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-22 19:19:07.340451 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-22 19:19:07.340477 | orchestrator | 2025-06-22 19:19:07.340500 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-22 19:19:07.340520 | orchestrator | 2025-06-22 19:19:07.340540 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:19:08.760061 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:08.760144 | orchestrator | 2025-06-22 19:19:08.760162 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:19:08.811673 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:08.811749 | orchestrator | 2025-06-22 19:19:08.811763 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:19:08.888700 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:08.888753 | orchestrator | 2025-06-22 19:19:08.888761 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:19:09.729806 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:09.729848 | orchestrator | 2025-06-22 19:19:09.729858 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:19:10.460408 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:10.460502 | orchestrator | 2025-06-22 19:19:10.460519 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:19:11.990544 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-22 19:19:11.990587 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-22 19:19:11.990595 | orchestrator | 2025-06-22 19:19:11.990611 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:19:13.433267 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:13.433353 | orchestrator | 2025-06-22 19:19:13.433364 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 19:19:15.309828 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 
19:19:15.309884 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-22 19:19:15.309893 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:19:15.309900 | orchestrator | 2025-06-22 19:19:15.309909 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-06-22 19:19:15.373183 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:15.373228 | orchestrator | 2025-06-22 19:19:15.373238 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:19:15.990788 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:15.990836 | orchestrator | 2025-06-22 19:19:15.990849 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:19:16.067041 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:16.067100 | orchestrator | 2025-06-22 19:19:16.067115 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-22 19:19:16.985552 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:19:16.985620 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:16.985636 | orchestrator | 2025-06-22 19:19:16.985649 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:19:17.027008 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:17.027054 | orchestrator | 2025-06-22 19:19:17.027063 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:19:17.065084 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:17.065142 | orchestrator | 2025-06-22 19:19:17.065157 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:19:17.101434 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:17.101496 | orchestrator | 2025-06-22 19:19:17.101514 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:19:17.156665 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:17.156725 | orchestrator | 2025-06-22 19:19:17.156742 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:19:17.897997 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:17.898091 | orchestrator | 2025-06-22 19:19:17.898107 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:19:17.898120 | orchestrator | 2025-06-22 19:19:17.898132 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:19:19.332422 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:19.332488 | orchestrator | 2025-06-22 19:19:19.332503 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-22 19:19:20.276713 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:20.276795 | orchestrator | 2025-06-22 19:19:20.276810 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:19:20.276824 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-06-22 19:19:20.276836 | orchestrator | 2025-06-22 19:19:20.821857 | orchestrator | ok: Runtime: 0:10:30.185620 2025-06-22 19:19:20.834906 | 
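For context, the osism.commons.operator play above provisions the operator account that the rest of the deployment logs in as. A condensed, illustrative shell version of those tasks follows; the sudoers content and the public-key path are assumptions, "dragon" is the operator user visible later in this log, and the role itself uses Ansible modules and templates rather than shell commands.

```bash
# Illustrative summary of the osism.commons.operator tasks above (assumptions noted inline).
sudo groupadd dragon                                             # "Create operator group"
sudo useradd --create-home --gid dragon --shell /bin/bash dragon # "Create user"
sudo usermod --append --groups adm,sudo dragon                   # "Add user to additional groups"
echo 'dragon ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/dragon  # assumed sudoers content
printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' \
  | sudo tee -a /home/dragon/.bashrc                             # ".bashrc" language variables
sudo install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh    # "Create .ssh directory"
sudo install -m 0600 -o dragon -g dragon operator_key.pub \
  /home/dragon/.ssh/authorized_keys                              # key file path is a placeholder
sudo passwd --lock dragon                                        # "Unset & lock password"
```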
2025-06-22 19:19:20.835047 | TASK [Point out that the log in on the manager is now possible] 2025-06-22 19:19:20.880398 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-22 19:19:20.891094 | 2025-06-22 19:19:20.891222 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 19:19:20.936874 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 19:19:20.948282 | 2025-06-22 19:19:20.948430 | TASK [Run manager part 1 + 2] 2025-06-22 19:19:22.188852 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:19:22.276249 | orchestrator | 2025-06-22 19:19:22.276299 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-22 19:19:22.276306 | orchestrator | 2025-06-22 19:19:22.276319 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:19:25.562580 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:25.562962 | orchestrator | 2025-06-22 19:19:25.563020 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:19:25.596972 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:25.597053 | orchestrator | 2025-06-22 19:19:25.597072 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:19:25.628430 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:25.628634 | orchestrator | 2025-06-22 19:19:25.628658 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:19:25.665788 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:25.665869 | orchestrator | 2025-06-22 19:19:25.665886 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:19:25.736935 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:25.737017 | orchestrator | 2025-06-22 19:19:25.737035 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:19:25.800960 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:25.801046 | orchestrator | 2025-06-22 19:19:25.801064 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:19:25.843260 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-22 19:19:25.843360 | orchestrator | 2025-06-22 19:19:25.843376 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:19:26.601105 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:26.601842 | orchestrator | 2025-06-22 19:19:26.601874 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:19:26.655041 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:26.655123 | orchestrator | 2025-06-22 19:19:26.655137 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:19:28.094355 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:28.094486 | orchestrator | 2025-06-22 19:19:28.094505 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:19:28.673137 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:28.673221 | orchestrator | 2025-06-22 19:19:28.673237 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:19:29.856508 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:29.856611 | orchestrator | 2025-06-22 19:19:29.856641 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:19:43.006140 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:43.006195 | orchestrator | 2025-06-22 19:19:43.006201 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:19:43.692625 | orchestrator | ok: [testbed-manager] 2025-06-22 19:19:43.692715 | orchestrator | 2025-06-22 19:19:43.692732 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:19:43.745132 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:19:43.745198 | orchestrator | 2025-06-22 19:19:43.745209 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-22 19:19:44.760642 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:44.760706 | orchestrator | 2025-06-22 19:19:44.760720 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-22 19:19:45.732920 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:45.733005 | orchestrator | 2025-06-22 19:19:45.733021 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-22 19:19:46.304877 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:46.304912 | orchestrator | 2025-06-22 19:19:46.304919 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-22 19:19:46.345511 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:19:46.345634 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:19:46.345654 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 19:19:46.345668 | orchestrator | deprecation_warnings=False in ansible.cfg. 
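The key- and repo-copy steps above can be pictured as a plain scp/rsync sequence. This is only a sketch under assumptions: the playbook uses Ansible copy/synchronize-style tasks, the local source path ./testbed/ is a placeholder, and "dragon" is the operator user on the manager as seen elsewhere in this log.

```bash
# Illustrative equivalent of "Copy SSH public/private key", "Create configuration
# directory" and "Copy testbed repo" above; paths marked below are placeholders.
scp ~/.ssh/id_rsa     dragon@testbed-manager:/home/dragon/.ssh/id_rsa
scp ~/.ssh/id_rsa.pub dragon@testbed-manager:/home/dragon/.ssh/id_rsa.pub
ssh dragon@testbed-manager sudo install -d -o dragon -g dragon /opt/configuration
rsync -a --delete ./testbed/ dragon@testbed-manager:/opt/configuration/   # ./testbed/ is a placeholder
```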
2025-06-22 19:19:55.426878 | orchestrator | changed: [testbed-manager] 2025-06-22 19:19:55.426976 | orchestrator | 2025-06-22 19:19:55.426992 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-22 19:20:04.417188 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-22 19:20:04.417386 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-22 19:20:04.417406 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-22 19:20:04.417448 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-22 19:20:04.417468 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-22 19:20:04.417479 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-22 19:20:04.417491 | orchestrator | 2025-06-22 19:20:04.417503 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-22 19:20:05.503092 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:05.503175 | orchestrator | 2025-06-22 19:20:05.503192 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-22 19:20:05.545671 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:05.545731 | orchestrator | 2025-06-22 19:20:05.545739 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-22 19:20:08.684681 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:08.684775 | orchestrator | 2025-06-22 19:20:08.684793 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-22 19:20:08.728584 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:08.728673 | orchestrator | 2025-06-22 19:20:08.728689 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-22 19:21:44.437401 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:44.437457 | orchestrator | 2025-06-22 19:21:44.437464 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:21:45.524201 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:45.524239 | orchestrator | 2025-06-22 19:21:45.524246 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:21:45.524253 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-22 19:21:45.524258 | orchestrator | 2025-06-22 19:21:46.076318 | orchestrator | ok: Runtime: 0:02:24.347855 2025-06-22 19:21:46.093915 | 2025-06-22 19:21:46.094095 | TASK [Reboot manager] 2025-06-22 19:21:47.632601 | orchestrator | ok: Runtime: 0:00:00.920337 2025-06-22 19:21:47.648492 | 2025-06-22 19:21:47.648646 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:22:01.688865 | orchestrator | ok 2025-06-22 19:22:01.695958 | 2025-06-22 19:22:01.696078 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:23:01.745310 | orchestrator | ok 2025-06-22 19:23:01.754625 | 2025-06-22 19:23:01.754750 | TASK [Deploy manager + bootstrap nodes] 2025-06-22 19:23:04.185788 | orchestrator | 2025-06-22 19:23:04.185977 | orchestrator | # DEPLOY MANAGER 2025-06-22 19:23:04.186003 | orchestrator | 2025-06-22 19:23:04.186071 | orchestrator | + set -e 2025-06-22 19:23:04.186089 | orchestrator | + echo 2025-06-22 19:23:04.186103 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-22 19:23:04.186120 | orchestrator | + echo 2025-06-22 19:23:04.186167 | orchestrator | + cat /opt/manager-vars.sh 2025-06-22 19:23:04.189409 | orchestrator | export NUMBER_OF_NODES=6 2025-06-22 19:23:04.189450 | orchestrator | 2025-06-22 19:23:04.189544 | orchestrator | export CEPH_VERSION=reef 2025-06-22 19:23:04.189560 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-22 19:23:04.189572 | orchestrator | export MANAGER_VERSION=9.1.0 2025-06-22 19:23:04.189594 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-22 19:23:04.189605 | orchestrator | 2025-06-22 19:23:04.189622 | orchestrator | export ARA=false 2025-06-22 19:23:04.189634 | orchestrator | export DEPLOY_MODE=manager 2025-06-22 19:23:04.189651 | orchestrator | export TEMPEST=false 2025-06-22 19:23:04.189662 | orchestrator | export IS_ZUUL=true 2025-06-22 19:23:04.189673 | orchestrator | 2025-06-22 19:23:04.189690 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:23:04.189702 | orchestrator | export EXTERNAL_API=false 2025-06-22 19:23:04.189712 | orchestrator | 2025-06-22 19:23:04.189722 | orchestrator | export IMAGE_USER=ubuntu 2025-06-22 19:23:04.189736 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-22 19:23:04.189746 | orchestrator | 2025-06-22 19:23:04.189757 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-22 19:23:04.189776 | orchestrator | 2025-06-22 19:23:04.189787 | orchestrator | + echo 2025-06-22 19:23:04.189799 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:23:04.190526 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:23:04.190584 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:23:04.190603 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:23:04.190638 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:23:04.190676 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:23:04.190690 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:23:04.190702 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:23:04.190714 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:23:04.190726 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:23:04.190738 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:23:04.190788 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:23:04.190805 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:23:04.190817 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:23:04.190829 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:23:04.190850 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:23:04.190861 | orchestrator | ++ export ARA=false 2025-06-22 19:23:04.190872 | orchestrator | ++ ARA=false 2025-06-22 19:23:04.190883 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:23:04.190893 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:23:04.190904 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:23:04.190914 | orchestrator | ++ TEMPEST=false 2025-06-22 19:23:04.190925 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:23:04.190935 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:23:04.190946 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:23:04.190957 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:23:04.190968 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:23:04.190978 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:23:04.190989 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 
19:23:04.190999 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:23:04.191010 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:23:04.191021 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:23:04.191230 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:23:04.191248 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:23:04.191259 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-22 19:23:04.242441 | orchestrator | + docker version 2025-06-22 19:23:04.520319 | orchestrator | Client: Docker Engine - Community 2025-06-22 19:23:04.520415 | orchestrator | Version: 27.5.1 2025-06-22 19:23:04.520433 | orchestrator | API version: 1.47 2025-06-22 19:23:04.520444 | orchestrator | Go version: go1.22.11 2025-06-22 19:23:04.520483 | orchestrator | Git commit: 9f9e405 2025-06-22 19:23:04.520496 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:23:04.520509 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:23:04.520519 | orchestrator | Context: default 2025-06-22 19:23:04.520530 | orchestrator | 2025-06-22 19:23:04.520541 | orchestrator | Server: Docker Engine - Community 2025-06-22 19:23:04.520552 | orchestrator | Engine: 2025-06-22 19:23:04.520563 | orchestrator | Version: 27.5.1 2025-06-22 19:23:04.520574 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-22 19:23:04.520614 | orchestrator | Go version: go1.22.11 2025-06-22 19:23:04.520626 | orchestrator | Git commit: 4c9b3b0 2025-06-22 19:23:04.520637 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:23:04.520647 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:23:04.520658 | orchestrator | Experimental: false 2025-06-22 19:23:04.520669 | orchestrator | containerd: 2025-06-22 19:23:04.520680 | orchestrator | Version: 1.7.27 2025-06-22 19:23:04.520690 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-22 19:23:04.520702 | orchestrator | runc: 2025-06-22 19:23:04.520713 | orchestrator | Version: 1.2.5 2025-06-22 19:23:04.520723 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-22 19:23:04.520734 | orchestrator | docker-init: 2025-06-22 19:23:04.520745 | orchestrator | Version: 0.19.0 2025-06-22 19:23:04.520756 | orchestrator | GitCommit: de40ad0 2025-06-22 19:23:04.524228 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-22 19:23:04.533781 | orchestrator | + set -e 2025-06-22 19:23:04.533820 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:23:04.533832 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:23:04.533843 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:23:04.533854 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:23:04.533917 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:23:04.533940 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:23:04.533962 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:23:04.533985 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:23:04.534005 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:23:04.534083 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:23:04.534096 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:23:04.534107 | orchestrator | ++ export ARA=false 2025-06-22 19:23:04.534118 | orchestrator | ++ ARA=false 2025-06-22 19:23:04.534128 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:23:04.534139 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:23:04.534150 | orchestrator | ++ 
export TEMPEST=false 2025-06-22 19:23:04.534160 | orchestrator | ++ TEMPEST=false 2025-06-22 19:23:04.534171 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:23:04.534181 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:23:04.534192 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:23:04.534203 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:23:04.534214 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:23:04.534224 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:23:04.534235 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:23:04.534245 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:23:04.534256 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:23:04.534267 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:23:04.534285 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:23:04.534296 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:23:04.534307 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:23:04.534318 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:23:04.534329 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:23:04.534339 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:23:04.534354 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:23:04.534365 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:23:04.534376 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-06-22 19:23:04.540833 | orchestrator | + set -e 2025-06-22 19:23:04.540879 | orchestrator | + VERSION=9.1.0 2025-06-22 19:23:04.540895 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:23:04.548660 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:23:04.548693 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:23:04.553240 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:23:04.558712 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-06-22 19:23:04.567893 | orchestrator | + set -e 2025-06-22 19:23:04.567935 | orchestrator | /opt/configuration ~ 2025-06-22 19:23:04.567947 | orchestrator | + pushd /opt/configuration 2025-06-22 19:23:04.567958 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:23:04.569733 | orchestrator | + source /opt/venv/bin/activate 2025-06-22 19:23:04.571702 | orchestrator | ++ deactivate nondestructive 2025-06-22 19:23:04.571735 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:04.571749 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:04.571784 | orchestrator | ++ hash -r 2025-06-22 19:23:04.571795 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:04.571806 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-22 19:23:04.571816 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-22 19:23:04.571827 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-22 19:23:04.571839 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-22 19:23:04.571849 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-22 19:23:04.571860 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-22 19:23:04.571871 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-22 19:23:04.571882 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:23:04.571893 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:23:04.571904 | orchestrator | ++ export PATH 2025-06-22 19:23:04.571915 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:04.571926 | orchestrator | ++ '[' -z '' ']' 2025-06-22 19:23:04.571936 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-22 19:23:04.571946 | orchestrator | ++ PS1='(venv) ' 2025-06-22 19:23:04.571957 | orchestrator | ++ export PS1 2025-06-22 19:23:04.571967 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-22 19:23:04.571978 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-22 19:23:04.571989 | orchestrator | ++ hash -r 2025-06-22 19:23:04.572000 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-06-22 19:23:05.558975 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-06-22 19:23:05.560012 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4) 2025-06-22 19:23:05.561337 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-06-22 19:23:05.562663 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-06-22 19:23:05.564033 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-06-22 19:23:05.573673 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-06-22 19:23:05.575132 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-06-22 19:23:05.576484 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-06-22 19:23:05.577608 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-06-22 19:23:05.610265 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-06-22 19:23:05.611444 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-06-22 19:23:05.613251 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-06-22 19:23:05.614573 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.6.15) 2025-06-22 19:23:05.618613 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-06-22 19:23:05.819785 | orchestrator | ++ which gilt 2025-06-22 19:23:05.823026 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-06-22 19:23:05.823067 | orchestrator | + /opt/venv/bin/gilt overlay 2025-06-22 19:23:06.047155 | orchestrator | osism.cfg-generics: 2025-06-22 19:23:06.177725 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-06-22 19:23:06.178834 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-06-22 19:23:06.179774 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-06-22 19:23:06.179971 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-06-22 19:23:06.907307 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-06-22 19:23:06.917068 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-06-22 19:23:07.266685 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-06-22 19:23:07.319999 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:23:07.320063 | orchestrator | + deactivate 2025-06-22 19:23:07.320077 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 19:23:07.320090 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:23:07.320101 | orchestrator | + export PATH 2025-06-22 19:23:07.320112 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 19:23:07.320123 | orchestrator | + '[' -n '' ']' 2025-06-22 19:23:07.320136 | orchestrator | + hash -r 2025-06-22 19:23:07.320147 | orchestrator | + '[' -n '' ']' 2025-06-22 19:23:07.320158 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 19:23:07.320168 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 19:23:07.320179 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-22 19:23:07.320190 | orchestrator | + unset -f deactivate 2025-06-22 19:23:07.320200 | orchestrator | + popd 2025-06-22 19:23:07.320220 | orchestrator | ~ 2025-06-22 19:23:07.321814 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-22 19:23:07.321899 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-22 19:23:07.322766 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:23:07.377570 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:23:07.377655 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-22 19:23:07.377670 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-22 19:23:07.467165 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:23:07.467280 | orchestrator | + source /opt/venv/bin/activate 2025-06-22 19:23:07.467294 | orchestrator | ++ deactivate nondestructive 2025-06-22 19:23:07.467317 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:07.467329 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:07.467339 | orchestrator | ++ hash -r 2025-06-22 19:23:07.467350 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:07.467361 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-22 19:23:07.467371 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-22 19:23:07.467413 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-22 19:23:07.467427 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-22 19:23:07.467438 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-22 19:23:07.467449 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-22 19:23:07.467489 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-22 19:23:07.467525 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:23:07.467539 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:23:07.467568 | orchestrator | ++ export PATH 2025-06-22 19:23:07.467579 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:23:07.467595 | orchestrator | ++ '[' -z '' ']' 2025-06-22 19:23:07.467606 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-22 19:23:07.467617 | orchestrator | ++ PS1='(venv) ' 2025-06-22 19:23:07.467627 | orchestrator | ++ export PS1 2025-06-22 19:23:07.467638 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-22 19:23:07.467648 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-22 19:23:07.467659 | orchestrator | ++ hash -r 2025-06-22 19:23:07.467670 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-22 19:23:08.583406 | orchestrator | 2025-06-22 19:23:08.583553 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-22 19:23:08.583573 | orchestrator | 2025-06-22 19:23:08.583593 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:23:09.154657 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:09.154832 | orchestrator | 2025-06-22 19:23:09.154850 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:23:10.168708 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:10.168802 | orchestrator | 2025-06-22 19:23:10.168818 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-22 19:23:10.168831 | orchestrator | 2025-06-22 19:23:10.168842 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:23:12.490369 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:12.490497 | orchestrator | 2025-06-22 19:23:12.490514 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-22 19:23:12.548368 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:12.548414 | orchestrator | 2025-06-22 19:23:12.548420 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-22 19:23:13.011099 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:13.011195 | orchestrator | 2025-06-22 19:23:13.011213 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-22 19:23:13.051331 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:13.051416 | orchestrator | 2025-06-22 19:23:13.051430 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 19:23:13.396259 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:13.396350 | orchestrator | 2025-06-22 19:23:13.396366 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-06-22 19:23:13.452628 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:13.452720 | orchestrator | 2025-06-22 19:23:13.452737 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-22 19:23:13.786781 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:13.786874 | orchestrator | 2025-06-22 19:23:13.786889 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-22 19:23:13.904186 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:13.904286 | orchestrator | 2025-06-22 19:23:13.904303 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-22 19:23:13.904316 | orchestrator | 2025-06-22 19:23:13.904327 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:23:15.718552 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:15.718635 | orchestrator | 2025-06-22 19:23:15.718651 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-22 19:23:15.847027 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-22 19:23:15.847138 | orchestrator | 2025-06-22 19:23:15.847163 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-22 19:23:15.902910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-22 19:23:15.902990 | orchestrator | 2025-06-22 19:23:15.903004 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-22 19:23:17.002295 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-22 19:23:17.002388 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-22 19:23:17.002405 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-22 19:23:17.002417 | orchestrator | 2025-06-22 19:23:17.002479 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-22 19:23:18.800235 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-22 19:23:18.800322 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-22 19:23:18.800337 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-22 19:23:18.800349 | orchestrator | 2025-06-22 19:23:18.800361 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-22 19:23:19.444868 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:23:19.444957 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:19.444972 | orchestrator | 2025-06-22 19:23:19.444984 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-22 19:23:20.094214 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:23:20.094281 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:20.094293 | orchestrator | 2025-06-22 19:23:20.094301 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-22 19:23:20.142654 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:20.142735 | orchestrator | 2025-06-22 19:23:20.142759 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-06-22 19:23:20.500363 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:20.500491 | orchestrator | 2025-06-22 19:23:20.500508 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-22 19:23:20.566586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-22 19:23:20.566675 | orchestrator | 2025-06-22 19:23:20.566698 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-22 19:23:21.647729 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:21.647819 | orchestrator | 2025-06-22 19:23:21.647836 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-22 19:23:22.464073 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:22.464295 | orchestrator | 2025-06-22 19:23:22.464326 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-22 19:23:34.123154 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:34.123253 | orchestrator | 2025-06-22 19:23:34.123289 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-22 19:23:34.167576 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:34.167669 | orchestrator | 2025-06-22 19:23:34.167684 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-22 19:23:34.167698 | orchestrator | 2025-06-22 19:23:34.167710 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:23:36.015075 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:36.015168 | orchestrator | 2025-06-22 19:23:36.015185 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-22 19:23:36.125957 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-22 19:23:36.126096 | orchestrator | 2025-06-22 19:23:36.126112 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-22 19:23:36.188272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:23:36.188352 | orchestrator | 2025-06-22 19:23:36.188365 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-22 19:23:38.679718 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:38.679825 | orchestrator | 2025-06-22 19:23:38.679841 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-22 19:23:38.726939 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:38.727002 | orchestrator | 2025-06-22 19:23:38.727016 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-22 19:23:38.862106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-22 19:23:38.862203 | orchestrator | 2025-06-22 19:23:38.862218 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-22 19:23:41.672114 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-22 19:23:41.672216 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-06-22 19:23:41.672231 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-22 19:23:41.672244 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-22 19:23:41.672255 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-22 19:23:41.672266 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-22 19:23:41.672277 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-22 19:23:41.672287 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-22 19:23:41.672298 | orchestrator | 2025-06-22 19:23:41.672313 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-22 19:23:42.306198 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:42.306319 | orchestrator | 2025-06-22 19:23:42.306347 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-22 19:23:42.954633 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:42.954727 | orchestrator | 2025-06-22 19:23:42.954742 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-22 19:23:43.033993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-22 19:23:43.034169 | orchestrator | 2025-06-22 19:23:43.034198 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-22 19:23:44.215139 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-22 19:23:44.215268 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-22 19:23:44.215285 | orchestrator | 2025-06-22 19:23:44.215298 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-22 19:23:44.825689 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:44.825800 | orchestrator | 2025-06-22 19:23:44.825815 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-22 19:23:44.893585 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:44.893666 | orchestrator | 2025-06-22 19:23:44.893677 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-22 19:23:44.955692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-22 19:23:44.955756 | orchestrator | 2025-06-22 19:23:44.955761 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-22 19:23:46.304274 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:23:46.304374 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:23:46.304389 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:46.304402 | orchestrator | 2025-06-22 19:23:46.304414 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-22 19:23:46.941986 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:46.942136 | orchestrator | 2025-06-22 19:23:46.942153 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-22 19:23:47.003596 | orchestrator | skipping: [testbed-manager] 2025-06-22 
19:23:47.003692 | orchestrator | 2025-06-22 19:23:47.003708 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-22 19:23:47.096352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-22 19:23:47.096420 | orchestrator | 2025-06-22 19:23:47.096434 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-22 19:23:47.641302 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:47.641453 | orchestrator | 2025-06-22 19:23:47.641494 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-22 19:23:48.033750 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:48.033843 | orchestrator | 2025-06-22 19:23:48.033859 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-22 19:23:49.251258 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-22 19:23:49.251370 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-22 19:23:49.252149 | orchestrator | 2025-06-22 19:23:49.252171 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-22 19:23:49.897178 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:49.897275 | orchestrator | 2025-06-22 19:23:49.897290 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-22 19:23:50.297885 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:50.297986 | orchestrator | 2025-06-22 19:23:50.298002 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-22 19:23:50.675577 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:50.675669 | orchestrator | 2025-06-22 19:23:50.675684 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-22 19:23:50.722687 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:50.722766 | orchestrator | 2025-06-22 19:23:50.722780 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-22 19:23:50.793533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-22 19:23:50.793629 | orchestrator | 2025-06-22 19:23:50.793645 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-22 19:23:50.848788 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:50.848902 | orchestrator | 2025-06-22 19:23:50.848929 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-22 19:23:52.846982 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-22 19:23:52.847115 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-22 19:23:52.847132 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-22 19:23:52.847144 | orchestrator | 2025-06-22 19:23:52.847157 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-22 19:23:53.555312 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:53.555408 | orchestrator | 2025-06-22 19:23:53.555424 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] 
********************* 2025-06-22 19:23:54.268577 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:54.268706 | orchestrator | 2025-06-22 19:23:54.269447 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-22 19:23:54.991911 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:54.992008 | orchestrator | 2025-06-22 19:23:54.992024 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-22 19:23:55.064894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-22 19:23:55.064982 | orchestrator | 2025-06-22 19:23:55.064995 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-22 19:23:55.118381 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:55.118509 | orchestrator | 2025-06-22 19:23:55.118525 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-22 19:23:55.824142 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-22 19:23:55.824256 | orchestrator | 2025-06-22 19:23:55.824272 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-22 19:23:55.917747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-22 19:23:55.917827 | orchestrator | 2025-06-22 19:23:55.917840 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-22 19:23:56.599127 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:56.599223 | orchestrator | 2025-06-22 19:23:56.599238 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-22 19:23:57.214433 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:57.214552 | orchestrator | 2025-06-22 19:23:57.214568 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-22 19:23:57.269452 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:23:57.269571 | orchestrator | 2025-06-22 19:23:57.269589 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-22 19:23:57.327588 | orchestrator | ok: [testbed-manager] 2025-06-22 19:23:57.327673 | orchestrator | 2025-06-22 19:23:57.327689 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-22 19:23:58.136075 | orchestrator | changed: [testbed-manager] 2025-06-22 19:23:58.136194 | orchestrator | 2025-06-22 19:23:58.136214 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-22 19:25:02.471942 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:02.472057 | orchestrator | 2025-06-22 19:25:02.472072 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-22 19:25:03.461143 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:03.461225 | orchestrator | 2025-06-22 19:25:03.461235 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-22 19:25:03.525950 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:03.526101 | orchestrator | 2025-06-22 19:25:03.526116 | orchestrator | TASK [osism.services.manager : 
Manage manager service] ************************* 2025-06-22 19:25:05.967172 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:05.967277 | orchestrator | 2025-06-22 19:25:05.967294 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-22 19:25:06.024099 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:06.024187 | orchestrator | 2025-06-22 19:25:06.024201 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:25:06.024214 | orchestrator | 2025-06-22 19:25:06.024225 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-22 19:25:06.073096 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:06.073180 | orchestrator | 2025-06-22 19:25:06.073223 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-22 19:26:06.131340 | orchestrator | Pausing for 60 seconds 2025-06-22 19:26:06.131434 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:06.131443 | orchestrator | 2025-06-22 19:26:06.131450 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-22 19:26:09.673597 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:09.673711 | orchestrator | 2025-06-22 19:26:09.673730 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-22 19:26:51.406657 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-22 19:26:51.406783 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-06-22 19:26:51.406799 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:51.406813 | orchestrator | 2025-06-22 19:26:51.406825 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-22 19:27:00.075075 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:00.075200 | orchestrator | 2025-06-22 19:27:00.075236 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-22 19:27:00.152980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-22 19:27:00.153079 | orchestrator | 2025-06-22 19:27:00.153093 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:27:00.153106 | orchestrator | 2025-06-22 19:27:00.153117 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-22 19:27:00.210397 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:00.210473 | orchestrator | 2025-06-22 19:27:00.210482 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:27:00.210490 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-22 19:27:00.210496 | orchestrator | 2025-06-22 19:27:00.311735 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:27:00.311819 | orchestrator | + deactivate 2025-06-22 19:27:00.311831 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 19:27:00.311842 | orchestrator | + 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:27:00.311850 | orchestrator | + export PATH 2025-06-22 19:27:00.311863 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 19:27:00.311871 | orchestrator | + '[' -n '' ']' 2025-06-22 19:27:00.311880 | orchestrator | + hash -r 2025-06-22 19:27:00.311888 | orchestrator | + '[' -n '' ']' 2025-06-22 19:27:00.311896 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 19:27:00.311904 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 19:27:00.311912 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-22 19:27:00.311920 | orchestrator | + unset -f deactivate 2025-06-22 19:27:00.311929 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-22 19:27:00.319775 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:27:00.319818 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:27:00.319829 | orchestrator | + local max_attempts=60 2025-06-22 19:27:00.319840 | orchestrator | + local name=ceph-ansible 2025-06-22 19:27:00.319851 | orchestrator | + local attempt_num=1 2025-06-22 19:27:00.320776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:27:00.353974 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:27:00.354103 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:27:00.354117 | orchestrator | + local max_attempts=60 2025-06-22 19:27:00.354129 | orchestrator | + local name=kolla-ansible 2025-06-22 19:27:00.354141 | orchestrator | + local attempt_num=1 2025-06-22 19:27:00.355019 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:27:00.396395 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:27:00.396481 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:27:00.396493 | orchestrator | + local max_attempts=60 2025-06-22 19:27:00.396504 | orchestrator | + local name=osism-ansible 2025-06-22 19:27:00.396514 | orchestrator | + local attempt_num=1 2025-06-22 19:27:00.397427 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:27:00.437844 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:27:00.437923 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:27:00.437934 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:27:01.215291 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-22 19:27:01.436639 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-22 19:27:01.436740 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436757 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436769 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-22 19:27:01.436782 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-22 19:27:01.436793 | orchestrator | 
manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436804 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436815 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-22 19:27:01.436825 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436836 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-22 19:27:01.436846 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436857 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-22 19:27:01.436867 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436878 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.436888 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-22 19:27:01.443501 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:27:01.485727 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:27:01.485804 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-22 19:27:01.488639 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-22 19:27:03.222476 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:27:03.222659 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:27:03.222677 | orchestrator | Registering Redlock._release_script 2025-06-22 19:27:03.415027 | orchestrator | 2025-06-22 19:27:03 | INFO  | Task 3c37bd43-3eae-4a8b-98c9-4d07de2905f4 (resolvconf) was prepared for execution. 2025-06-22 19:27:03.415130 | orchestrator | 2025-06-22 19:27:03 | INFO  | It takes a moment until task 3c37bd43-3eae-4a8b-98c9-4d07de2905f4 (resolvconf) has been started and output is visible here. 
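The deploy script expands a wait_for_container_healthy helper in the shell trace above before it moves on. A minimal sketch of that helper, reconstructed from the traced variables (max_attempts, name, attempt_num) and the docker inspect call; only the already-healthy fast path is visible in the log, so the retry loop and the polling interval below are assumptions:

wait_for_container_healthy() {
    # Poll the Docker healthcheck status of a container until it reports "healthy".
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # polling interval is not visible in the trace; assumption
    done
}

# Called as in the trace above: wait_for_container_healthy 60 ceph-ansible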
2025-06-22 19:27:07.511002 | orchestrator | 2025-06-22 19:27:07.511097 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-22 19:27:07.512496 | orchestrator | 2025-06-22 19:27:07.514514 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:27:07.515430 | orchestrator | Sunday 22 June 2025 19:27:07 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-22 19:27:11.332925 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:11.333180 | orchestrator | 2025-06-22 19:27:11.333785 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:27:11.334114 | orchestrator | Sunday 22 June 2025 19:27:11 +0000 (0:00:03.821) 0:00:03.968 *********** 2025-06-22 19:27:11.379756 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:11.380323 | orchestrator | 2025-06-22 19:27:11.380819 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:27:11.382178 | orchestrator | Sunday 22 June 2025 19:27:11 +0000 (0:00:00.052) 0:00:04.020 *********** 2025-06-22 19:27:11.454173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-22 19:27:11.455465 | orchestrator | 2025-06-22 19:27:11.456519 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:27:11.457140 | orchestrator | Sunday 22 June 2025 19:27:11 +0000 (0:00:00.074) 0:00:04.095 *********** 2025-06-22 19:27:11.518328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:27:11.518485 | orchestrator | 2025-06-22 19:27:11.520139 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:27:11.521088 | orchestrator | Sunday 22 June 2025 19:27:11 +0000 (0:00:00.062) 0:00:04.157 *********** 2025-06-22 19:27:12.635126 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:12.635731 | orchestrator | 2025-06-22 19:27:12.635991 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:27:12.637033 | orchestrator | Sunday 22 June 2025 19:27:12 +0000 (0:00:01.114) 0:00:05.272 *********** 2025-06-22 19:27:12.712665 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:12.712794 | orchestrator | 2025-06-22 19:27:12.714209 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:27:12.715121 | orchestrator | Sunday 22 June 2025 19:27:12 +0000 (0:00:00.078) 0:00:05.351 *********** 2025-06-22 19:27:13.248109 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:13.248211 | orchestrator | 2025-06-22 19:27:13.250282 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:27:13.250775 | orchestrator | Sunday 22 June 2025 19:27:13 +0000 (0:00:00.536) 0:00:05.887 *********** 2025-06-22 19:27:13.369091 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:13.369250 | orchestrator | 2025-06-22 19:27:13.369859 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:27:13.373044 | orchestrator | Sunday 22 June 2025 19:27:13 +0000 (0:00:00.119) 0:00:06.006 
*********** 2025-06-22 19:27:13.904121 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:13.905456 | orchestrator | 2025-06-22 19:27:13.906144 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:27:13.907354 | orchestrator | Sunday 22 June 2025 19:27:13 +0000 (0:00:00.537) 0:00:06.543 *********** 2025-06-22 19:27:15.061399 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:15.062438 | orchestrator | 2025-06-22 19:27:15.063657 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:27:15.065425 | orchestrator | Sunday 22 June 2025 19:27:15 +0000 (0:00:01.155) 0:00:07.699 *********** 2025-06-22 19:27:16.069360 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:16.070794 | orchestrator | 2025-06-22 19:27:16.070846 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:27:16.070977 | orchestrator | Sunday 22 June 2025 19:27:16 +0000 (0:00:01.008) 0:00:08.707 *********** 2025-06-22 19:27:16.151313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-22 19:27:16.151830 | orchestrator | 2025-06-22 19:27:16.153608 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:27:16.153785 | orchestrator | Sunday 22 June 2025 19:27:16 +0000 (0:00:00.082) 0:00:08.790 *********** 2025-06-22 19:27:17.395763 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:17.395873 | orchestrator | 2025-06-22 19:27:17.398114 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:27:17.399072 | orchestrator | 2025-06-22 19:27:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:27:17.399097 | orchestrator | 2025-06-22 19:27:17 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:27:17.399418 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:27:17.401205 | orchestrator | 2025-06-22 19:27:17.401433 | orchestrator | 2025-06-22 19:27:17.402587 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:27:17.404832 | orchestrator | Sunday 22 June 2025 19:27:17 +0000 (0:00:01.244) 0:00:10.035 *********** 2025-06-22 19:27:17.407438 | orchestrator | =============================================================================== 2025-06-22 19:27:17.408603 | orchestrator | Gathering Facts --------------------------------------------------------- 3.82s 2025-06-22 19:27:17.409609 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2025-06-22 19:27:17.410267 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s 2025-06-22 19:27:17.411421 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2025-06-22 19:27:17.412117 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s 2025-06-22 19:27:17.412969 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-06-22 19:27:17.413589 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2025-06-22 19:27:17.414231 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.12s 2025-06-22 19:27:17.414793 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-22 19:27:17.415439 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2025-06-22 19:27:17.416207 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-06-22 19:27:17.416650 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-06-22 19:27:17.417296 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-06-22 19:27:17.895943 | orchestrator | + osism apply sshconfig 2025-06-22 19:27:19.583497 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:27:19.583633 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:27:19.583650 | orchestrator | Registering Redlock._release_script 2025-06-22 19:27:19.640009 | orchestrator | 2025-06-22 19:27:19 | INFO  | Task 471c830e-14e1-44b9-9b45-1c6b0a8774ce (sshconfig) was prepared for execution. 2025-06-22 19:27:19.640093 | orchestrator | 2025-06-22 19:27:19 | INFO  | It takes a moment until task 471c830e-14e1-44b9-9b45-1c6b0a8774ce (sshconfig) has been started and output is visible here. 
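The resolvconf play above amounts to switching the manager to systemd-resolved on a Debian-family host. A rough manual equivalent of the changed tasks, with the paths taken from the task names; the files pushed by "Copy configuration files" are not shown in the log and are therefore omitted here:

# Point /etc/resolv.conf at the systemd-resolved stub resolver
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# Ensure the resolver is enabled, then restart it to pick up the new configuration
systemctl enable --now systemd-resolved
systemctl restart systemd-resolved
# Optional check: show which DNS servers systemd-resolved is actually using
resolvectl status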
2025-06-22 19:27:23.769058 | orchestrator | 2025-06-22 19:27:23.769240 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-22 19:27:23.770086 | orchestrator | 2025-06-22 19:27:23.772286 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-22 19:27:23.773327 | orchestrator | Sunday 22 June 2025 19:27:23 +0000 (0:00:00.171) 0:00:00.171 *********** 2025-06-22 19:27:24.368339 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:24.369521 | orchestrator | 2025-06-22 19:27:24.370300 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-22 19:27:24.371385 | orchestrator | Sunday 22 June 2025 19:27:24 +0000 (0:00:00.602) 0:00:00.774 *********** 2025-06-22 19:27:24.863632 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:24.864048 | orchestrator | 2025-06-22 19:27:24.864921 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-22 19:27:24.865866 | orchestrator | Sunday 22 June 2025 19:27:24 +0000 (0:00:00.491) 0:00:01.265 *********** 2025-06-22 19:27:30.748710 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:27:30.748847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:27:30.749075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:27:30.749662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:27:30.750007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:27:30.751419 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:27:30.751493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:27:30.751511 | orchestrator | 2025-06-22 19:27:30.751607 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-22 19:27:30.752122 | orchestrator | Sunday 22 June 2025 19:27:30 +0000 (0:00:05.887) 0:00:07.152 *********** 2025-06-22 19:27:30.816274 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:30.816769 | orchestrator | 2025-06-22 19:27:30.817515 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-22 19:27:30.819030 | orchestrator | Sunday 22 June 2025 19:27:30 +0000 (0:00:00.069) 0:00:07.222 *********** 2025-06-22 19:27:31.434458 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:31.434630 | orchestrator | 2025-06-22 19:27:31.434649 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:27:31.434743 | orchestrator | 2025-06-22 19:27:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:27:31.435252 | orchestrator | 2025-06-22 19:27:31 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:27:31.436194 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:27:31.436945 | orchestrator | 2025-06-22 19:27:31.437471 | orchestrator | 2025-06-22 19:27:31.437995 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:27:31.438641 | orchestrator | Sunday 22 June 2025 19:27:31 +0000 (0:00:00.616) 0:00:07.838 *********** 2025-06-22 19:27:31.438973 | orchestrator | =============================================================================== 2025-06-22 19:27:31.439480 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.89s 2025-06-22 19:27:31.440022 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2025-06-22 19:27:31.440583 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-06-22 19:27:31.441003 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-06-22 19:27:31.441446 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-22 19:27:32.028249 | orchestrator | + osism apply known-hosts 2025-06-22 19:27:33.899014 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:27:33.899149 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:27:33.899172 | orchestrator | Registering Redlock._release_script 2025-06-22 19:27:33.961055 | orchestrator | 2025-06-22 19:27:33 | INFO  | Task 2e8cc5ce-a0c0-4440-af00-40993442d48c (known-hosts) was prepared for execution. 2025-06-22 19:27:33.961141 | orchestrator | 2025-06-22 19:27:33 | INFO  | It takes a moment until task 2e8cc5ce-a0c0-4440-af00-40993442d48c (known-hosts) has been started and output is visible here. 
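The sshconfig play above writes one config fragment per testbed host under ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. A rough manual equivalent for one node; the HostName, User and IdentityFile values are placeholders for illustration (the real values come from the inventory and are not printed in the log):

mkdir -p ~/.ssh/config.d
cat > ~/.ssh/config.d/testbed-node-0 <<'EOF'
Host testbed-node-0
    HostName 192.168.16.10
    User dragon
    IdentityFile ~/.ssh/id_rsa
EOF
# "Assemble ssh config": concatenate all fragments into the effective config
cat ~/.ssh/config.d/* > ~/.ssh/config
chmod 600 ~/.ssh/config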
2025-06-22 19:27:37.929647 | orchestrator | 2025-06-22 19:27:37.930149 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-22 19:27:37.930236 | orchestrator | 2025-06-22 19:27:37.930343 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-22 19:27:37.930774 | orchestrator | Sunday 22 June 2025 19:27:37 +0000 (0:00:00.128) 0:00:00.128 *********** 2025-06-22 19:27:43.687212 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:27:43.687649 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:27:43.687681 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:27:43.688535 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:27:43.690584 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:27:43.691728 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:27:43.692592 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:27:43.693602 | orchestrator | 2025-06-22 19:27:43.694480 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-22 19:27:43.695213 | orchestrator | Sunday 22 June 2025 19:27:43 +0000 (0:00:05.755) 0:00:05.883 *********** 2025-06-22 19:27:43.884420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 19:27:43.884995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:27:43.885818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:27:43.887935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:27:43.888314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:27:43.888817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:27:43.889715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:27:43.889984 | orchestrator | 2025-06-22 19:27:43.890513 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:43.890947 | orchestrator | Sunday 22 June 2025 19:27:43 +0000 (0:00:00.198) 0:00:06.082 *********** 2025-06-22 19:27:45.191311 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFKCy/ohe88osO1+JcOhZgx33p0GHmiiwf+KZRXA0igl) 2025-06-22 19:27:45.192063 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpbkMphWCVUveROkdHO/9Dcol5C9jahD6BVMbQt1sfT+/XsboOnQpTV7VX2+K96m02Gs2CHDy2nsnf1+bNzxqDwUSm7ihdwLrNJoCDm6l9As3a/S/9a3vxqFiMc553RICR1KSKRU8EAIBDWgvu15h/xSCgsTZbhV3XTle5S2uJ38M6n3OHngqTm2yeNojiwj1N8m8bDI6nB+mzFPrHObRmLlGyGR5yh/yKCqP6KyehfTfJ7AiRv5UUjlkJONOiPh8SGn0+/nr6CMdEnPFdKKGtz39cn2EVeVu0Hdx1IQjMlnT9iVHTBvtUWO2PR6ZPNajLKL5y1Co5pVB1fewviRKoiY9K5G9r8FOxsjJwDq2tsJe5Q9JKMpVaIncgcpKzu2pZfkdDFz3IVDsc4T8EYAWV5iwUnKJ15zeU461ttxWw1A/tsyZNqe1H1NeKAH7IU8az34VFbCFaK+xC2blTxd113hjn15zNJ9vrUzA/QISg3P+1qSC2cLZKe9QbrLu7+5U=) 2025-06-22 19:27:45.192702 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1TfTthltYSEAbL086IRNiKVhuxm/m/VUdkHcx11QdjL7AjI/KK11gn0FxBOCFN1jLxhXsUOUZiO7K677Yjr74=) 2025-06-22 19:27:45.193490 | orchestrator | 2025-06-22 19:27:45.194098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:45.194686 | orchestrator | Sunday 22 June 2025 19:27:45 +0000 (0:00:01.306) 0:00:07.388 *********** 2025-06-22 19:27:46.359505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdX7u2b77a3bMfzNqiTGhGm3ycLy5ayj04yTE+dSRbd) 2025-06-22 19:27:46.362107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKVbuP2gTlqtoaH5Ry86HEp8c+RXdja2Ik4TZ8M7CEo370DwKOqv2PWFpKI78o31MlO67fLh1jibfqtYckTQrQs8CrIPL7ZtHWKGO03Zy8rpQanwbLNuVTbeUOV0DkxzOEkP0ZrS7XK/f0j/aFh823Q2GtTt/n5jzBQ7O6NImD+rNUpwwo5t2kkUwm5AnSiGSLhmJRerY+hFrNRn9KdduYYzAvdPCYQ35ip4RIWSk02jIxP3jNmB1smIkHsq6m6mLpDH0HYnULpZf8Js4Kl7y8hvaKoLgZaWa6se1MDaPHPfcLx1gj1woXCvUhOLEGP4K1T4sHGG9jrNNr+Hkpa2uxVaMLdIYKf9uvITSy3CBO3dEn2+/SG8+RKD+UlwwDk7ngDVZtoDVD0+rkBv84fxgx5jJV/xfxfE4UDPPmjo2dGXvXfDO9NI7Oco98F+/bZ1QAF5OdOQHCfHiz5as2tECSTFNv4rzrs/Pwkb92Q3gw39kkXLovCJjBAyFTKXj0pZM=) 2025-06-22 19:27:46.362921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBtht86dGMTbAn8xDw/TdD/awCWveQN/tMKclNvnwop344bQx1JrU+Cao5It5wHZVrE00FjffhN3c0hUtPAcxM=) 2025-06-22 19:27:46.363605 | orchestrator | 2025-06-22 19:27:46.364531 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:46.366090 | orchestrator | Sunday 22 June 2025 19:27:46 +0000 (0:00:01.167) 0:00:08.556 *********** 2025-06-22 19:27:47.451844 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRQA2iraOGvW78N04grBkzoUJp/2dR3P/66leCcqvqnvvL6F8L1CjrgODMD8cQ/ZuWOTO+PUUwvgJYPpnY1zlDY/ZjP+Nq7UVchiz/woycjanS6Yvtuo9IV6TrUMKPmk1K4820V+PN2U0GrsEc6Y7zsMKtufNFoXu4rdldJnIoerQFBDwyW4USgf1Oghnlu0kfGVMuRcOVWwJT5+DyvQwjJ7A8td3igfBRjorWT/pD6cdZdvT/lZgdmoUJ13/Hzll6LupK37tomPqWqPS8dV2pDB6MqeCagz64sjp3847ODK7kBDKYH3N4Ji8f2ebkkgy7RM7FVHVH55q7qe8OQSuWmFPgxht/Lgi2yGzvVNQ93WXNfGcPJVD7GbPkquMOpE+k5cbRof52jdDYoLcYCEQOJQtAxxAobZqBDiLNvxK1eKr9BwEdk89Xs37HlQGHPgpx/xCarzd0au+RFt+fHrFKNwp/u6m4n2TyuB5C91BhkGGmAx0vAK6hQYTdFjA1mDU=) 2025-06-22 19:27:47.452168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCJ+o8jRdP7/k8q82dJC735GBaoPXAP67n0YnBgsNjCEafHXu7i+1UAwELuDQJJIQjL6eHVwW58ZheevNjHJ/ZU=) 2025-06-22 19:27:47.453068 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAE8UmY9RVsu9LMwru11RdNeZR8FvqWC6dCQf2d7AtO7) 2025-06-22 
19:27:47.453809 | orchestrator | 2025-06-22 19:27:47.454316 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:47.455229 | orchestrator | Sunday 22 June 2025 19:27:47 +0000 (0:00:01.093) 0:00:09.649 *********** 2025-06-22 19:27:48.521683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGoiir3mEG2ohZNtzVTq1f3UtYpcxV9wqyedplh4FiUh) 2025-06-22 19:27:48.523282 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7ajErV/50eS3zzJ6n/VFIwpp0tZ1XPQ5gD4LJfOrlkQ0h7En3GpfT/PqoCeMdyK7t0y4rFMrC/9ULq5UANetH6DT0WpD8h70d5Onxe6tBbupNCE1izpwnqvPPtX62wJFSDEdVqwcvQnqNiuC4MRmKmJytKXCURva6Wfd5uk/yYcBxZvAivQ71dyxvq6A4CwtZyXamiiw2mb8lGOoGQ9suXDf2j8lDTUfP1QdgTj1Fs7jRDGXDQFZUdXxncKTjoB6O4qIBBxlHtytkZwrGqLMe3hUFQ8QQE+yuLxGW5tqFiTYxwV4rJElHqYEVR1tOPGPm3yLtF6mL7ItgrzZpXHncda1Njp2P8WkklyfHR/aJZ9un0kffDqjxfhmcoM8C8Yj3gNiTgggMpfvslVol6Z9O6MgAahx2hxjjLTiqfiVUGnx972cYRufm6HdfOTQkaf5ekM79ZR0+0jO+3chH99sg8vytr+KNp1yjeGiYNMwV4NRuJFKEHOjSxhUUOxsi7Y8=) 2025-06-22 19:27:48.523415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKZ6B43c9heRqkGfaEYCZ/uXQfTNpR2i1u0hh0M+bQe/OmVyDK3p433UH/Rns51kkcOfCslGJhaXGzWUH3uumg4=) 2025-06-22 19:27:48.523806 | orchestrator | 2025-06-22 19:27:48.524408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:48.524794 | orchestrator | Sunday 22 June 2025 19:27:48 +0000 (0:00:01.071) 0:00:10.721 *********** 2025-06-22 19:27:49.501072 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCve+B9AfoJwA5h9x9CwbrO4m1z0qUmbqwfgh9wBplrwBivXv2RwcBWrvVMg7dJVyO2IrKPafDdqbNk8ZktQqZsvxloiDhcnS9YC7f2KcnA9h13f9/brcCIvfiwQCNrf84h9s4ZOml7RqlFATYqw8szXeKf2+jofoPCMu0GAtt9cyO5abMXxfnz1p84Knuskl4B+fwXD7cNVBixJYinxXaxve5PudlTSeElvdPQfRI9vn2KjCppOS/eFccoajSj4qApASSJL4SuAjmpilXGKIzx0w6kuz/MW52sQLvXUpolTN64EaMCNm4t34XfVMf9QfSbP0F4Hpmf/qWwRBRUM+tSBEyxoCjV6gcoj0mkmdlvNEIrL/zL2v5RgcEsC51E06XztzpFEjtqk3PCWyMj4wLgekc+2x0K2z9kQxan6ujpDewklZ4ZfJfbcnKvhRlif6GUdo6dzwVBhTI5kn9fwZDpowOa+c3WDRRUp03Yu/r1jGa9KGQQB42MwVRNF4iS2SE=) 2025-06-22 19:27:49.501941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCfUq0O8o8LRuQiZaenVQ3/fnhdkBHcad1W3KCzgzS9cgExFYrEy3uKYIhhm8UqkWBF2/KbH1cL28+4OXQncyIE=) 2025-06-22 19:27:49.502132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP3R1n+ND1gsP3CzFfw6D/REu69A5hbzgItWC+PJpBq5) 2025-06-22 19:27:49.502830 | orchestrator | 2025-06-22 19:27:49.503161 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:49.503571 | orchestrator | Sunday 22 June 2025 19:27:49 +0000 (0:00:00.977) 0:00:11.699 *********** 2025-06-22 19:27:50.471753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChpCW96hLTM4N6PKHTsx85Py/dp6PFbWdyHn2o5kDOFo9TnlStZk1Yl5Usoh4tHXe/ad1embCTWz9hI2+lLABdJjL+h73GARSC50+ICR578LhP6vzpGz2dsDS8BW/PbUqjSQ54rU9S9uxDc7QnZRRLE9UqD47IYFLq6vPQjGYueIpNQVzjPruNYyRK+7uwnER9FA/iVimnBubBGpkMVhDsCVCKUPgZO1HlsWaH+lXom99Fsr67HoCb6C4KH4xxadz6BDeuv9lLsVKLaVlNv4gUScKVAo1NfCtZvCTdCPOAQGvIxqn0OsqDWa5rflPEocZhVvR22rNbBMlJ7PMQORrVTV5f3XNF5uO2ABbUtQ7M6nm5MOnK5WAGEU+6aiUxNkQutAbzUpWfQW3MV/BH6F2hqX2WpwlloF4UL3qzVvC7duRKYQRZH+6PJY+RNsq28+nQ6oDnEmNh56khorRxVwU8eiaDZqUYmL4ctG7WO3Q1y1FNYTDMDK0PbuTYP4bTYiE=) 2025-06-22 19:27:50.472799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpZLcZgZMINssvFvBlFoUl670C3e+5BjWjysWLWVbpZ) 2025-06-22 19:27:50.472982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMXG11AC0onoQKHdwMllIw7EQluHUj+a5rqzcWbFtDTWrWFJVIYUprJ3lHsoXb4TYLPaezaeb2vXJaRwpgZfbo=) 2025-06-22 19:27:50.474079 | orchestrator | 2025-06-22 19:27:50.474950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:50.475747 | orchestrator | Sunday 22 June 2025 19:27:50 +0000 (0:00:00.971) 0:00:12.670 *********** 2025-06-22 19:27:51.469006 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfyL7+emi2N9oP8DEus4VtdBo4TuZKASXaS95GOc0ZEWlmgWRgFYpWeTCU0KNcLtU+GzTVdBvpoxXEUFTvZx0aoZmoRS0xwKdZJh2qsZf6Jbs7TCtkHPil9r8sPxVXAVRw9DDa4aRH0RsAw+Yyp3S8i2nHSfOYLHsKJY3RXeCMbJQyM5BnCNEOOa02L06zAUuBq+UY1kuZmsT9IiLPkxZXdJga7XV7LwPamDDCAcG9QdT9EJrQLgHMhS6RmW7KGjNbzOHDpKO1OAdG18vvqlAk5QNl2+h/vo5GoA5rwZi9wMH5zffiRRWLLYUfJ200+M1AX2/wORHpsSyi6daQTX8VbgQVcuZCBt2y2xoRD+2YG+UigUSJ/8xD48l4wNCeh7GKFsDU/JacKiuhTecNOPKVjGrjwblWyUccaE/KBPi9UR9SzBjHDV/2Aeirs2m7M5P+oiVf10n6ZocshZzN4pwnWzPP90wpPs+deFpny84pTs53azzyLH4dhWjvNtkMiJ8=) 2025-06-22 19:27:51.470133 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEF1UK2/a/9NtUv298qqoOlITJux4Co3FLI2UBmpX5nolvA8skjmJkuIm47syE5zm9FYBoBGlfc03mdxuRG/p/M=) 2025-06-22 19:27:51.470911 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGjXug3VPEAMd/sIRhLe09THgB1jUjfFjhnGnnXIfn0w) 2025-06-22 19:27:51.471593 | orchestrator | 2025-06-22 19:27:51.472358 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-22 19:27:51.473104 | orchestrator | Sunday 22 June 2025 19:27:51 +0000 (0:00:00.997) 0:00:13.668 *********** 2025-06-22 19:27:56.495940 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:27:56.497096 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:27:56.497533 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:27:56.499725 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:27:56.500650 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:27:56.501515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:27:56.502205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:27:56.503263 | orchestrator | 2025-06-22 19:27:56.504075 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-22 19:27:56.504727 | orchestrator | Sunday 22 June 2025 19:27:56 +0000 
(0:00:05.023) 0:00:18.691 *********** 2025-06-22 19:27:56.653966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 19:27:56.654515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:27:56.656252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:27:56.657047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:27:56.657792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:27:56.658665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:27:56.659343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:27:56.660220 | orchestrator | 2025-06-22 19:27:56.660885 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:56.661343 | orchestrator | Sunday 22 June 2025 19:27:56 +0000 (0:00:00.161) 0:00:18.852 *********** 2025-06-22 19:27:57.831193 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFKCy/ohe88osO1+JcOhZgx33p0GHmiiwf+KZRXA0igl) 2025-06-22 19:27:57.831654 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpbkMphWCVUveROkdHO/9Dcol5C9jahD6BVMbQt1sfT+/XsboOnQpTV7VX2+K96m02Gs2CHDy2nsnf1+bNzxqDwUSm7ihdwLrNJoCDm6l9As3a/S/9a3vxqFiMc553RICR1KSKRU8EAIBDWgvu15h/xSCgsTZbhV3XTle5S2uJ38M6n3OHngqTm2yeNojiwj1N8m8bDI6nB+mzFPrHObRmLlGyGR5yh/yKCqP6KyehfTfJ7AiRv5UUjlkJONOiPh8SGn0+/nr6CMdEnPFdKKGtz39cn2EVeVu0Hdx1IQjMlnT9iVHTBvtUWO2PR6ZPNajLKL5y1Co5pVB1fewviRKoiY9K5G9r8FOxsjJwDq2tsJe5Q9JKMpVaIncgcpKzu2pZfkdDFz3IVDsc4T8EYAWV5iwUnKJ15zeU461ttxWw1A/tsyZNqe1H1NeKAH7IU8az34VFbCFaK+xC2blTxd113hjn15zNJ9vrUzA/QISg3P+1qSC2cLZKe9QbrLu7+5U=) 2025-06-22 19:27:57.832975 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1TfTthltYSEAbL086IRNiKVhuxm/m/VUdkHcx11QdjL7AjI/KK11gn0FxBOCFN1jLxhXsUOUZiO7K677Yjr74=) 2025-06-22 19:27:57.833948 | orchestrator | 2025-06-22 19:27:57.834754 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:57.835850 | orchestrator | Sunday 22 June 2025 19:27:57 +0000 (0:00:01.175) 0:00:20.028 *********** 2025-06-22 19:27:58.876994 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKVbuP2gTlqtoaH5Ry86HEp8c+RXdja2Ik4TZ8M7CEo370DwKOqv2PWFpKI78o31MlO67fLh1jibfqtYckTQrQs8CrIPL7ZtHWKGO03Zy8rpQanwbLNuVTbeUOV0DkxzOEkP0ZrS7XK/f0j/aFh823Q2GtTt/n5jzBQ7O6NImD+rNUpwwo5t2kkUwm5AnSiGSLhmJRerY+hFrNRn9KdduYYzAvdPCYQ35ip4RIWSk02jIxP3jNmB1smIkHsq6m6mLpDH0HYnULpZf8Js4Kl7y8hvaKoLgZaWa6se1MDaPHPfcLx1gj1woXCvUhOLEGP4K1T4sHGG9jrNNr+Hkpa2uxVaMLdIYKf9uvITSy3CBO3dEn2+/SG8+RKD+UlwwDk7ngDVZtoDVD0+rkBv84fxgx5jJV/xfxfE4UDPPmjo2dGXvXfDO9NI7Oco98F+/bZ1QAF5OdOQHCfHiz5as2tECSTFNv4rzrs/Pwkb92Q3gw39kkXLovCJjBAyFTKXj0pZM=) 2025-06-22 19:27:58.877878 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBtht86dGMTbAn8xDw/TdD/awCWveQN/tMKclNvnwop344bQx1JrU+Cao5It5wHZVrE00FjffhN3c0hUtPAcxM=) 2025-06-22 19:27:58.878879 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdX7u2b77a3bMfzNqiTGhGm3ycLy5ayj04yTE+dSRbd) 2025-06-22 19:27:58.879734 | orchestrator | 2025-06-22 19:27:58.880634 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:58.881482 | orchestrator | Sunday 22 June 2025 19:27:58 +0000 (0:00:01.045) 0:00:21.074 *********** 2025-06-22 19:27:59.982493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRQA2iraOGvW78N04grBkzoUJp/2dR3P/66leCcqvqnvvL6F8L1CjrgODMD8cQ/ZuWOTO+PUUwvgJYPpnY1zlDY/ZjP+Nq7UVchiz/woycjanS6Yvtuo9IV6TrUMKPmk1K4820V+PN2U0GrsEc6Y7zsMKtufNFoXu4rdldJnIoerQFBDwyW4USgf1Oghnlu0kfGVMuRcOVWwJT5+DyvQwjJ7A8td3igfBRjorWT/pD6cdZdvT/lZgdmoUJ13/Hzll6LupK37tomPqWqPS8dV2pDB6MqeCagz64sjp3847ODK7kBDKYH3N4Ji8f2ebkkgy7RM7FVHVH55q7qe8OQSuWmFPgxht/Lgi2yGzvVNQ93WXNfGcPJVD7GbPkquMOpE+k5cbRof52jdDYoLcYCEQOJQtAxxAobZqBDiLNvxK1eKr9BwEdk89Xs37HlQGHPgpx/xCarzd0au+RFt+fHrFKNwp/u6m4n2TyuB5C91BhkGGmAx0vAK6hQYTdFjA1mDU=) 2025-06-22 19:27:59.984837 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCJ+o8jRdP7/k8q82dJC735GBaoPXAP67n0YnBgsNjCEafHXu7i+1UAwELuDQJJIQjL6eHVwW58ZheevNjHJ/ZU=) 2025-06-22 19:27:59.985892 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAE8UmY9RVsu9LMwru11RdNeZR8FvqWC6dCQf2d7AtO7) 2025-06-22 19:27:59.987349 | orchestrator | 2025-06-22 19:27:59.988163 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:27:59.989410 | orchestrator | Sunday 22 June 2025 19:27:59 +0000 (0:00:01.105) 0:00:22.180 *********** 2025-06-22 19:28:01.077381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7ajErV/50eS3zzJ6n/VFIwpp0tZ1XPQ5gD4LJfOrlkQ0h7En3GpfT/PqoCeMdyK7t0y4rFMrC/9ULq5UANetH6DT0WpD8h70d5Onxe6tBbupNCE1izpwnqvPPtX62wJFSDEdVqwcvQnqNiuC4MRmKmJytKXCURva6Wfd5uk/yYcBxZvAivQ71dyxvq6A4CwtZyXamiiw2mb8lGOoGQ9suXDf2j8lDTUfP1QdgTj1Fs7jRDGXDQFZUdXxncKTjoB6O4qIBBxlHtytkZwrGqLMe3hUFQ8QQE+yuLxGW5tqFiTYxwV4rJElHqYEVR1tOPGPm3yLtF6mL7ItgrzZpXHncda1Njp2P8WkklyfHR/aJZ9un0kffDqjxfhmcoM8C8Yj3gNiTgggMpfvslVol6Z9O6MgAahx2hxjjLTiqfiVUGnx972cYRufm6HdfOTQkaf5ekM79ZR0+0jO+3chH99sg8vytr+KNp1yjeGiYNMwV4NRuJFKEHOjSxhUUOxsi7Y8=) 2025-06-22 19:28:01.077684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKZ6B43c9heRqkGfaEYCZ/uXQfTNpR2i1u0hh0M+bQe/OmVyDK3p433UH/Rns51kkcOfCslGJhaXGzWUH3uumg4=) 2025-06-22 
19:28:01.078618 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGoiir3mEG2ohZNtzVTq1f3UtYpcxV9wqyedplh4FiUh) 2025-06-22 19:28:01.079194 | orchestrator | 2025-06-22 19:28:01.079945 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:01.081142 | orchestrator | Sunday 22 June 2025 19:28:01 +0000 (0:00:01.091) 0:00:23.272 *********** 2025-06-22 19:28:02.213942 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCve+B9AfoJwA5h9x9CwbrO4m1z0qUmbqwfgh9wBplrwBivXv2RwcBWrvVMg7dJVyO2IrKPafDdqbNk8ZktQqZsvxloiDhcnS9YC7f2KcnA9h13f9/brcCIvfiwQCNrf84h9s4ZOml7RqlFATYqw8szXeKf2+jofoPCMu0GAtt9cyO5abMXxfnz1p84Knuskl4B+fwXD7cNVBixJYinxXaxve5PudlTSeElvdPQfRI9vn2KjCppOS/eFccoajSj4qApASSJL4SuAjmpilXGKIzx0w6kuz/MW52sQLvXUpolTN64EaMCNm4t34XfVMf9QfSbP0F4Hpmf/qWwRBRUM+tSBEyxoCjV6gcoj0mkmdlvNEIrL/zL2v5RgcEsC51E06XztzpFEjtqk3PCWyMj4wLgekc+2x0K2z9kQxan6ujpDewklZ4ZfJfbcnKvhRlif6GUdo6dzwVBhTI5kn9fwZDpowOa+c3WDRRUp03Yu/r1jGa9KGQQB42MwVRNF4iS2SE=) 2025-06-22 19:28:02.214205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCfUq0O8o8LRuQiZaenVQ3/fnhdkBHcad1W3KCzgzS9cgExFYrEy3uKYIhhm8UqkWBF2/KbH1cL28+4OXQncyIE=) 2025-06-22 19:28:02.214985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP3R1n+ND1gsP3CzFfw6D/REu69A5hbzgItWC+PJpBq5) 2025-06-22 19:28:02.215584 | orchestrator | 2025-06-22 19:28:02.216456 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:02.217006 | orchestrator | Sunday 22 June 2025 19:28:02 +0000 (0:00:01.138) 0:00:24.410 *********** 2025-06-22 19:28:03.316119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChpCW96hLTM4N6PKHTsx85Py/dp6PFbWdyHn2o5kDOFo9TnlStZk1Yl5Usoh4tHXe/ad1embCTWz9hI2+lLABdJjL+h73GARSC50+ICR578LhP6vzpGz2dsDS8BW/PbUqjSQ54rU9S9uxDc7QnZRRLE9UqD47IYFLq6vPQjGYueIpNQVzjPruNYyRK+7uwnER9FA/iVimnBubBGpkMVhDsCVCKUPgZO1HlsWaH+lXom99Fsr67HoCb6C4KH4xxadz6BDeuv9lLsVKLaVlNv4gUScKVAo1NfCtZvCTdCPOAQGvIxqn0OsqDWa5rflPEocZhVvR22rNbBMlJ7PMQORrVTV5f3XNF5uO2ABbUtQ7M6nm5MOnK5WAGEU+6aiUxNkQutAbzUpWfQW3MV/BH6F2hqX2WpwlloF4UL3qzVvC7duRKYQRZH+6PJY+RNsq28+nQ6oDnEmNh56khorRxVwU8eiaDZqUYmL4ctG7WO3Q1y1FNYTDMDK0PbuTYP4bTYiE=) 2025-06-22 19:28:03.316536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEMXG11AC0onoQKHdwMllIw7EQluHUj+a5rqzcWbFtDTWrWFJVIYUprJ3lHsoXb4TYLPaezaeb2vXJaRwpgZfbo=) 2025-06-22 19:28:03.316626 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpZLcZgZMINssvFvBlFoUl670C3e+5BjWjysWLWVbpZ) 2025-06-22 19:28:03.318505 | orchestrator | 2025-06-22 19:28:03.319141 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:03.319825 | orchestrator | Sunday 22 June 2025 19:28:03 +0000 (0:00:01.102) 0:00:25.512 *********** 2025-06-22 19:28:04.406199 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDfyL7+emi2N9oP8DEus4VtdBo4TuZKASXaS95GOc0ZEWlmgWRgFYpWeTCU0KNcLtU+GzTVdBvpoxXEUFTvZx0aoZmoRS0xwKdZJh2qsZf6Jbs7TCtkHPil9r8sPxVXAVRw9DDa4aRH0RsAw+Yyp3S8i2nHSfOYLHsKJY3RXeCMbJQyM5BnCNEOOa02L06zAUuBq+UY1kuZmsT9IiLPkxZXdJga7XV7LwPamDDCAcG9QdT9EJrQLgHMhS6RmW7KGjNbzOHDpKO1OAdG18vvqlAk5QNl2+h/vo5GoA5rwZi9wMH5zffiRRWLLYUfJ200+M1AX2/wORHpsSyi6daQTX8VbgQVcuZCBt2y2xoRD+2YG+UigUSJ/8xD48l4wNCeh7GKFsDU/JacKiuhTecNOPKVjGrjwblWyUccaE/KBPi9UR9SzBjHDV/2Aeirs2m7M5P+oiVf10n6ZocshZzN4pwnWzPP90wpPs+deFpny84pTs53azzyLH4dhWjvNtkMiJ8=) 2025-06-22 19:28:04.406312 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEF1UK2/a/9NtUv298qqoOlITJux4Co3FLI2UBmpX5nolvA8skjmJkuIm47syE5zm9FYBoBGlfc03mdxuRG/p/M=) 2025-06-22 19:28:04.406451 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGjXug3VPEAMd/sIRhLe09THgB1jUjfFjhnGnnXIfn0w) 2025-06-22 19:28:04.407758 | orchestrator | 2025-06-22 19:28:04.408739 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-22 19:28:04.409790 | orchestrator | Sunday 22 June 2025 19:28:04 +0000 (0:00:01.088) 0:00:26.601 *********** 2025-06-22 19:28:04.573144 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:28:04.574169 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:28:04.575742 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:28:04.575776 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:28:04.576375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:28:04.577132 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:28:04.577857 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:28:04.578260 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:04.578659 | orchestrator | 2025-06-22 19:28:04.579341 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-22 19:28:04.579619 | orchestrator | Sunday 22 June 2025 19:28:04 +0000 (0:00:00.169) 0:00:26.771 *********** 2025-06-22 19:28:04.630946 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:04.631123 | orchestrator | 2025-06-22 19:28:04.631881 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-22 19:28:04.632749 | orchestrator | Sunday 22 June 2025 19:28:04 +0000 (0:00:00.058) 0:00:26.829 *********** 2025-06-22 19:28:04.698519 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:04.699360 | orchestrator | 2025-06-22 19:28:04.700409 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-22 19:28:04.701011 | orchestrator | Sunday 22 June 2025 19:28:04 +0000 (0:00:00.066) 0:00:26.896 *********** 2025-06-22 19:28:05.262251 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:05.262738 | orchestrator | 2025-06-22 19:28:05.264200 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:28:05.264417 | orchestrator | 2025-06-22 19:28:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:28:05.264483 | orchestrator | 2025-06-22 19:28:05 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:28:05.265567 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:28:05.266705 | orchestrator | 2025-06-22 19:28:05.267835 | orchestrator | 2025-06-22 19:28:05.269012 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:28:05.269705 | orchestrator | Sunday 22 June 2025 19:28:05 +0000 (0:00:00.565) 0:00:27.461 *********** 2025-06-22 19:28:05.271303 | orchestrator | =============================================================================== 2025-06-22 19:28:05.273034 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.76s 2025-06-22 19:28:05.273874 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.02s 2025-06-22 19:28:05.274677 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.31s 2025-06-22 19:28:05.275443 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-06-22 19:28:05.276179 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-06-22 19:28:05.276892 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-06-22 19:28:05.277675 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-22 19:28:05.278371 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-06-22 19:28:05.279042 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-22 19:28:05.279538 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-22 19:28:05.280315 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-22 19:28:05.280816 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-22 19:28:05.281474 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-22 19:28:05.281951 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-06-22 19:28:05.282771 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-22 19:28:05.283307 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-06-22 19:28:05.285164 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.57s 2025-06-22 19:28:05.285371 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2025-06-22 19:28:05.285908 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-06-22 19:28:05.286720 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-22 19:28:05.899990 | orchestrator | + osism apply squid 2025-06-22 19:28:07.585888 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:28:07.585979 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:28:07.586013 | orchestrator | Registering Redlock._release_script 2025-06-22 19:28:07.639735 | orchestrator | 2025-06-22 19:28:07 | INFO  | Task 35679ae2-034c-47d7-9a9d-0906ac632c8c (squid) was 
prepared for execution. 2025-06-22 19:28:07.639778 | orchestrator | 2025-06-22 19:28:07 | INFO  | It takes a moment until task 35679ae2-034c-47d7-9a9d-0906ac632c8c (squid) has been started and output is visible here. 2025-06-22 19:28:11.293227 | orchestrator | 2025-06-22 19:28:11.293746 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-22 19:28:11.294696 | orchestrator | 2025-06-22 19:28:11.294896 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-22 19:28:11.296711 | orchestrator | Sunday 22 June 2025 19:28:11 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-22 19:28:11.380128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:28:11.380200 | orchestrator | 2025-06-22 19:28:11.380667 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-22 19:28:11.381393 | orchestrator | Sunday 22 June 2025 19:28:11 +0000 (0:00:00.090) 0:00:00.243 *********** 2025-06-22 19:28:12.617893 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:12.618953 | orchestrator | 2025-06-22 19:28:12.619621 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-22 19:28:12.620077 | orchestrator | Sunday 22 June 2025 19:28:12 +0000 (0:00:01.236) 0:00:01.479 *********** 2025-06-22 19:28:13.784890 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-22 19:28:13.785934 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-22 19:28:13.786219 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-22 19:28:13.788317 | orchestrator | 2025-06-22 19:28:13.788927 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-22 19:28:13.789474 | orchestrator | Sunday 22 June 2025 19:28:13 +0000 (0:00:01.166) 0:00:02.646 *********** 2025-06-22 19:28:14.873049 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-22 19:28:14.873345 | orchestrator | 2025-06-22 19:28:14.874138 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-22 19:28:14.875172 | orchestrator | Sunday 22 June 2025 19:28:14 +0000 (0:00:01.087) 0:00:03.733 *********** 2025-06-22 19:28:15.215215 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:15.215679 | orchestrator | 2025-06-22 19:28:15.215711 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-22 19:28:15.215843 | orchestrator | Sunday 22 June 2025 19:28:15 +0000 (0:00:00.345) 0:00:04.078 *********** 2025-06-22 19:28:16.017664 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:16.018633 | orchestrator | 2025-06-22 19:28:16.019521 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-22 19:28:16.020534 | orchestrator | Sunday 22 June 2025 19:28:16 +0000 (0:00:00.800) 0:00:04.879 *********** 2025-06-22 19:28:48.507248 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
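Stepping back to the known-hosts play that completed above: it boils down to running ssh-keyscan for every host, once by hostname and once by its ansible_host address, and writing the results. A rough manual equivalent for one node, using a hostname and address taken from the scan output earlier; the target path ~/.ssh/known_hosts is an assumption, since the log does not show where the role writes:

# Scan by hostname and by address, as the role does, and append the collected keys
ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0 192.168.16.10 >> ~/.ssh/known_hosts
# Mirrors the role's final "Set file permissions" task
chmod 600 ~/.ssh/known_hosts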
2025-06-22 19:28:48.507325 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:48.507335 | orchestrator | 2025-06-22 19:28:48.507343 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-22 19:28:48.507716 | orchestrator | Sunday 22 June 2025 19:28:48 +0000 (0:00:32.485) 0:00:37.365 *********** 2025-06-22 19:29:00.856042 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:00.856188 | orchestrator | 2025-06-22 19:29:00.856267 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-22 19:29:00.859347 | orchestrator | Sunday 22 June 2025 19:29:00 +0000 (0:00:12.349) 0:00:49.714 *********** 2025-06-22 19:30:00.927312 | orchestrator | Pausing for 60 seconds 2025-06-22 19:30:00.927430 | orchestrator | changed: [testbed-manager] 2025-06-22 19:30:00.927452 | orchestrator | 2025-06-22 19:30:00.927473 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-22 19:30:00.927640 | orchestrator | Sunday 22 June 2025 19:30:00 +0000 (0:01:00.069) 0:01:49.784 *********** 2025-06-22 19:30:00.985614 | orchestrator | ok: [testbed-manager] 2025-06-22 19:30:00.985971 | orchestrator | 2025-06-22 19:30:00.987971 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-22 19:30:00.988018 | orchestrator | Sunday 22 June 2025 19:30:00 +0000 (0:00:00.064) 0:01:49.848 *********** 2025-06-22 19:30:01.577460 | orchestrator | changed: [testbed-manager] 2025-06-22 19:30:01.577610 | orchestrator | 2025-06-22 19:30:01.577699 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:30:01.578006 | orchestrator | 2025-06-22 19:30:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:30:01.578082 | orchestrator | 2025-06-22 19:30:01 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:30:01.578728 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:30:01.578953 | orchestrator | 2025-06-22 19:30:01.579768 | orchestrator | 2025-06-22 19:30:01.580456 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:30:01.580999 | orchestrator | Sunday 22 June 2025 19:30:01 +0000 (0:00:00.589) 0:01:50.438 *********** 2025-06-22 19:30:01.581429 | orchestrator | =============================================================================== 2025-06-22 19:30:01.582312 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-22 19:30:01.583164 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.49s 2025-06-22 19:30:01.584386 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.35s 2025-06-22 19:30:01.585256 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.24s 2025-06-22 19:30:01.586088 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2025-06-22 19:30:01.587000 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-06-22 19:30:01.587388 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2025-06-22 19:30:01.588250 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-06-22 19:30:01.589072 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-06-22 19:30:01.589465 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-06-22 19:30:01.589979 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-06-22 19:30:02.079505 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:30:02.079618 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-22 19:30:02.082470 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-22 19:30:02.137265 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-22 19:30:02.137419 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-22 19:30:03.827339 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:30:03.827437 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:30:03.827452 | orchestrator | Registering Redlock._release_script 2025-06-22 19:30:03.885985 | orchestrator | 2025-06-22 19:30:03 | INFO  | Task b0a0b125-5eea-4d9a-9cf7-16c1c001cfbb (operator) was prepared for execution. 2025-06-22 19:30:03.886126 | orchestrator | 2025-06-22 19:30:03 | INFO  | It takes a moment until task b0a0b125-5eea-4d9a-9cf7-16c1c001cfbb (operator) has been started and output is visible here. 
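The shell trace between the squid and operator runs is the deploy script's version gate: any pinned release (anything other than latest) switches the Kolla image namespace to kolla/release, and a semver comparison against 9.0.0 guards an extra branch that is skipped here because semver 9.1.0 9.0.0 returned 1. A minimal sketch of that logic, with MANAGER_VERSION as an assumed variable name:

    MANAGER_VERSION=9.1.0   # assumed name; the trace only shows the expanded value

    if [[ "${MANAGER_VERSION}" != "latest" ]]; then
        # Pinned releases pull Kolla images from the kolla/release namespace
        sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
            /opt/configuration/inventory/group_vars/all/kolla.yml
    fi

    # The trace compares the semver output numerically; a result below 0
    # (a version older than 9.0.0) would take the extra branch.
    if [[ "$(semver "${MANAGER_VERSION}" 9.0.0)" -lt 0 ]]; then
        :   # compatibility handling for pre-9.0.0 releases, not taken in this run
    fi

The osism apply operator -u ubuntu -l testbed-nodes call that follows presumably limits the play to the testbed-nodes group and connects as the ubuntu user in order to create the operator account, which is what the next play shows.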
2025-06-22 19:30:07.799348 | orchestrator | 2025-06-22 19:30:07.799445 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-22 19:30:07.799462 | orchestrator | 2025-06-22 19:30:07.799622 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:30:07.800428 | orchestrator | Sunday 22 June 2025 19:30:07 +0000 (0:00:00.133) 0:00:00.133 *********** 2025-06-22 19:30:10.806005 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:10.809159 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:30:10.810113 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:30:10.811731 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:30:10.812677 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:10.813153 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:10.813883 | orchestrator | 2025-06-22 19:30:10.814493 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-22 19:30:10.815423 | orchestrator | Sunday 22 June 2025 19:30:10 +0000 (0:00:03.011) 0:00:03.144 *********** 2025-06-22 19:30:11.467904 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:30:11.468053 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:11.470890 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:30:11.471376 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:30:11.472444 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:11.473035 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:11.473376 | orchestrator | 2025-06-22 19:30:11.474079 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-22 19:30:11.474760 | orchestrator | 2025-06-22 19:30:11.475659 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:30:11.475845 | orchestrator | Sunday 22 June 2025 19:30:11 +0000 (0:00:00.662) 0:00:03.807 *********** 2025-06-22 19:30:11.523203 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:30:11.544038 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:30:11.560804 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:30:11.609172 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:11.609767 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:11.610740 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:11.611787 | orchestrator | 2025-06-22 19:30:11.612402 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:30:11.613255 | orchestrator | Sunday 22 June 2025 19:30:11 +0000 (0:00:00.139) 0:00:03.946 *********** 2025-06-22 19:30:11.659346 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:30:11.700157 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:30:11.738200 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:30:11.738935 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:11.739617 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:11.739957 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:11.740641 | orchestrator | 2025-06-22 19:30:11.741288 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:30:11.741808 | orchestrator | Sunday 22 June 2025 19:30:11 +0000 (0:00:00.130) 0:00:04.076 *********** 2025-06-22 19:30:12.252884 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:12.253035 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:12.254106 | orchestrator | changed: [testbed-node-5] 2025-06-22 
19:30:12.255087 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:12.256539 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:12.257438 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:12.258530 | orchestrator | 2025-06-22 19:30:12.259619 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:30:12.260197 | orchestrator | Sunday 22 June 2025 19:30:12 +0000 (0:00:00.515) 0:00:04.591 *********** 2025-06-22 19:30:12.955373 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:12.956164 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:12.956911 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:12.957680 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:12.958257 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:12.959370 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:12.960024 | orchestrator | 2025-06-22 19:30:12.960719 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:30:12.961358 | orchestrator | Sunday 22 June 2025 19:30:12 +0000 (0:00:00.698) 0:00:05.290 *********** 2025-06-22 19:30:14.875242 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-22 19:30:14.875391 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-22 19:30:14.875417 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-22 19:30:14.875528 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-22 19:30:14.876120 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-22 19:30:14.876506 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-22 19:30:14.877555 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-22 19:30:14.877601 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-22 19:30:14.877657 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-22 19:30:14.878333 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-22 19:30:14.878392 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-22 19:30:14.879093 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-22 19:30:14.879116 | orchestrator | 2025-06-22 19:30:14.879461 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:30:14.879634 | orchestrator | Sunday 22 June 2025 19:30:14 +0000 (0:00:01.919) 0:00:07.210 *********** 2025-06-22 19:30:15.972007 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:15.972097 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:15.972112 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:15.972123 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:15.972275 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:15.972732 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:15.972970 | orchestrator | 2025-06-22 19:30:15.973399 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 19:30:15.973881 | orchestrator | Sunday 22 June 2025 19:30:15 +0000 (0:00:01.095) 0:00:08.306 *********** 2025-06-22 19:30:17.017208 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-22 19:30:17.018063 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-22 19:30:17.018100 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-22 19:30:17.096902 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.097336 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.098401 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.099477 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.100242 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.100892 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:30:17.101661 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.102231 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.102685 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.103558 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.104042 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.104674 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-22 19:30:17.105142 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.105809 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.106257 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.106770 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.107227 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.107877 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:30:17.108996 | orchestrator | 2025-06-22 19:30:17.109169 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:30:17.109777 | orchestrator | Sunday 22 June 2025 19:30:17 +0000 (0:00:01.129) 0:00:09.435 *********** 2025-06-22 19:30:17.606761 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:17.607645 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:17.608355 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:17.609747 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:17.610502 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:17.613339 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:17.613369 | orchestrator | 2025-06-22 19:30:17.613381 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:30:17.613393 | orchestrator | Sunday 22 June 2025 19:30:17 +0000 (0:00:00.508) 0:00:09.943 *********** 2025-06-22 19:30:17.683821 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:30:17.701191 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:30:17.736468 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:30:17.736519 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:17.736635 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:17.736686 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:17.737427 | orchestrator | 2025-06-22 19:30:17.738101 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-06-22 19:30:17.738123 | orchestrator | Sunday 22 June 2025 19:30:17 +0000 (0:00:00.132) 0:00:10.075 *********** 2025-06-22 19:30:18.362529 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 19:30:18.365306 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:18.366316 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:30:18.367141 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 19:30:18.367560 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:18.368613 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:18.369897 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:30:18.370284 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:18.371053 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:30:18.372030 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:18.372629 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:30:18.373382 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:18.374206 | orchestrator | 2025-06-22 19:30:18.374472 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:30:18.374974 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:00.623) 0:00:10.699 *********** 2025-06-22 19:30:18.421832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:30:18.440528 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:30:18.457354 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:30:18.479438 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:18.479966 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:18.480758 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:18.481783 | orchestrator | 2025-06-22 19:30:18.482637 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:30:18.483018 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:00.119) 0:00:10.819 *********** 2025-06-22 19:30:18.542454 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:30:18.559484 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:30:18.581066 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:30:18.600094 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:18.600896 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:18.601420 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:18.602204 | orchestrator | 2025-06-22 19:30:18.602865 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:30:18.603634 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:00.120) 0:00:10.939 *********** 2025-06-22 19:30:18.653386 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:30:18.669237 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:30:18.686782 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:30:18.707421 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:18.708491 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:18.709389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:18.710162 | orchestrator | 2025-06-22 19:30:18.710974 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:30:18.711852 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:00.107) 0:00:11.046 *********** 2025-06-22 19:30:20.293653 | orchestrator | changed: [testbed-node-1] 2025-06-22 
19:30:20.293762 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:20.294775 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:20.294809 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:20.295189 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:20.296871 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:20.297439 | orchestrator | 2025-06-22 19:30:20.297884 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:30:20.298455 | orchestrator | Sunday 22 June 2025 19:30:20 +0000 (0:00:01.583) 0:00:12.629 *********** 2025-06-22 19:30:20.376478 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:30:20.402708 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:30:20.510663 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:30:20.511126 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:20.511955 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:20.512685 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:20.513750 | orchestrator | 2025-06-22 19:30:20.515345 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:30:20.515409 | orchestrator | 2025-06-22 19:30:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:30:20.516269 | orchestrator | 2025-06-22 19:30:20 | INFO  | Please wait and do not abort execution. 2025-06-22 19:30:20.516401 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.516825 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.517624 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.518119 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.518942 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.519818 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:30:20.520221 | orchestrator | 2025-06-22 19:30:20.520995 | orchestrator | 2025-06-22 19:30:20.521477 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:30:20.521971 | orchestrator | Sunday 22 June 2025 19:30:20 +0000 (0:00:00.219) 0:00:12.848 *********** 2025-06-22 19:30:20.522637 | orchestrator | =============================================================================== 2025-06-22 19:30:20.523159 | orchestrator | Gathering Facts --------------------------------------------------------- 3.01s 2025-06-22 19:30:20.523868 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.92s 2025-06-22 19:30:20.524645 | orchestrator | osism.commons.operator : Set password ----------------------------------- 1.58s 2025-06-22 19:30:20.524672 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.13s 2025-06-22 19:30:20.524971 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.10s 2025-06-22 19:30:20.525507 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.70s 2025-06-22 19:30:20.525948 | orchestrator | Do not require 
tty for all users ---------------------------------------- 0.66s 2025-06-22 19:30:20.526590 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.62s 2025-06-22 19:30:20.527309 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.52s 2025-06-22 19:30:20.527681 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.51s 2025-06-22 19:30:20.528212 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-06-22 19:30:20.528520 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-06-22 19:30:20.528963 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.13s 2025-06-22 19:30:20.529388 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2025-06-22 19:30:20.529887 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s 2025-06-22 19:30:20.530402 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s 2025-06-22 19:30:20.530662 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.11s 2025-06-22 19:30:20.992670 | orchestrator | + osism apply --environment custom facts 2025-06-22 19:30:22.713821 | orchestrator | 2025-06-22 19:30:22 | INFO  | Trying to run play facts in environment custom 2025-06-22 19:30:22.720735 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:30:22.720776 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:30:22.720786 | orchestrator | Registering Redlock._release_script 2025-06-22 19:30:22.781652 | orchestrator | 2025-06-22 19:30:22 | INFO  | Task d6296ebe-c495-4d9e-8cb9-710e8bc297bc (facts) was prepared for execution. 2025-06-22 19:30:22.781712 | orchestrator | 2025-06-22 19:30:22 | INFO  | It takes a moment until task d6296ebe-c495-4d9e-8cb9-710e8bc297bc (facts) has been started and output is visible here. 
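The facts play started here distributes local fact files that later plays read back through fact gathering; Ansible exposes anything placed below /etc/ansible/facts.d as ansible_local.<name>. A minimal sketch of creating and reading such a fact by hand, assuming the default facts.d path and a hypothetical fact name (the play itself copies its own fact files, for example the ceph device lists seen further down):

    # Drop a static JSON fact on a node (the name "example" is hypothetical)
    sudo mkdir -p /etc/ansible/facts.d
    echo '{"role": "testbed", "managed_by": "osism"}' | sudo tee /etc/ansible/facts.d/example.fact

    # After the next fact gathering the content appears as ansible_local.example
    ansible testbed-node-3 -m setup -a 'filter=ansible_local'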
2025-06-22 19:30:26.554267 | orchestrator | 2025-06-22 19:30:26.554365 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-22 19:30:26.555244 | orchestrator | 2025-06-22 19:30:26.556146 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:30:26.556838 | orchestrator | Sunday 22 June 2025 19:30:26 +0000 (0:00:00.066) 0:00:00.066 *********** 2025-06-22 19:30:27.828214 | orchestrator | ok: [testbed-manager] 2025-06-22 19:30:27.828310 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:27.828395 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:27.830421 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:27.831473 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:27.832458 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:27.833124 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:27.834116 | orchestrator | 2025-06-22 19:30:27.834532 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-22 19:30:27.835554 | orchestrator | Sunday 22 June 2025 19:30:27 +0000 (0:00:01.271) 0:00:01.337 *********** 2025-06-22 19:30:28.891553 | orchestrator | ok: [testbed-manager] 2025-06-22 19:30:28.892173 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:28.893042 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:28.893914 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:30:28.894651 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:28.895398 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:30:28.896126 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:30:28.896911 | orchestrator | 2025-06-22 19:30:28.897397 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-22 19:30:28.897951 | orchestrator | 2025-06-22 19:30:28.898394 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:30:28.898920 | orchestrator | Sunday 22 June 2025 19:30:28 +0000 (0:00:01.066) 0:00:02.404 *********** 2025-06-22 19:30:28.993931 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:28.994366 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:28.994814 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:28.995292 | orchestrator | 2025-06-22 19:30:28.998151 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:30:28.998177 | orchestrator | Sunday 22 June 2025 19:30:28 +0000 (0:00:00.103) 0:00:02.507 *********** 2025-06-22 19:30:29.139048 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:29.139627 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:29.140009 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:29.140261 | orchestrator | 2025-06-22 19:30:29.140630 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:30:29.141264 | orchestrator | Sunday 22 June 2025 19:30:29 +0000 (0:00:00.146) 0:00:02.654 *********** 2025-06-22 19:30:29.298779 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:29.298998 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:29.299601 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:29.299895 | orchestrator | 2025-06-22 19:30:29.300568 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:30:29.300754 | orchestrator | Sunday 22 
June 2025 19:30:29 +0000 (0:00:00.159) 0:00:02.813 *********** 2025-06-22 19:30:29.416254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:30:29.417059 | orchestrator | 2025-06-22 19:30:29.417902 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:30:29.419818 | orchestrator | Sunday 22 June 2025 19:30:29 +0000 (0:00:00.116) 0:00:02.929 *********** 2025-06-22 19:30:29.814758 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:29.815343 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:29.816215 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:29.816749 | orchestrator | 2025-06-22 19:30:29.817597 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:30:29.818010 | orchestrator | Sunday 22 June 2025 19:30:29 +0000 (0:00:00.397) 0:00:03.327 *********** 2025-06-22 19:30:29.914681 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:30:29.915255 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:29.916633 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:29.917544 | orchestrator | 2025-06-22 19:30:29.920041 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:30:29.920420 | orchestrator | Sunday 22 June 2025 19:30:29 +0000 (0:00:00.101) 0:00:03.428 *********** 2025-06-22 19:30:30.846369 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:30.846754 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:30.847504 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:30.848288 | orchestrator | 2025-06-22 19:30:30.849100 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:30:30.849727 | orchestrator | Sunday 22 June 2025 19:30:30 +0000 (0:00:00.929) 0:00:04.358 *********** 2025-06-22 19:30:31.259856 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:31.260079 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:31.260689 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:31.261392 | orchestrator | 2025-06-22 19:30:31.262096 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:30:31.262390 | orchestrator | Sunday 22 June 2025 19:30:31 +0000 (0:00:00.409) 0:00:04.768 *********** 2025-06-22 19:30:32.177904 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:32.180042 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:32.180071 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:32.180798 | orchestrator | 2025-06-22 19:30:32.183418 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:30:32.184125 | orchestrator | Sunday 22 June 2025 19:30:32 +0000 (0:00:00.922) 0:00:05.690 *********** 2025-06-22 19:30:44.983461 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:44.983622 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:44.983640 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:44.983653 | orchestrator | 2025-06-22 19:30:44.983666 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-22 19:30:44.984070 | orchestrator | Sunday 22 June 2025 19:30:44 +0000 (0:00:12.798) 0:00:18.489 *********** 2025-06-22 19:30:45.086400 | orchestrator | 
skipping: [testbed-node-3] 2025-06-22 19:30:45.087368 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:30:45.088771 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:30:45.088796 | orchestrator | 2025-06-22 19:30:45.089714 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-22 19:30:45.090065 | orchestrator | Sunday 22 June 2025 19:30:45 +0000 (0:00:00.109) 0:00:18.598 *********** 2025-06-22 19:30:51.374368 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:30:51.374729 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:30:51.375454 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:30:51.376504 | orchestrator | 2025-06-22 19:30:51.376793 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:30:51.377340 | orchestrator | Sunday 22 June 2025 19:30:51 +0000 (0:00:06.285) 0:00:24.884 *********** 2025-06-22 19:30:51.897214 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:51.898461 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:51.899245 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:51.900102 | orchestrator | 2025-06-22 19:30:51.901674 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:30:51.903099 | orchestrator | Sunday 22 June 2025 19:30:51 +0000 (0:00:00.525) 0:00:25.410 *********** 2025-06-22 19:30:54.993815 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-22 19:30:54.993964 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-22 19:30:54.994104 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-22 19:30:54.995070 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-22 19:30:54.997474 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-22 19:30:54.998298 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-22 19:30:55.001715 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-22 19:30:55.001804 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-22 19:30:55.001820 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-22 19:30:55.001882 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:30:55.001893 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:30:55.001905 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:30:55.001985 | orchestrator | 2025-06-22 19:30:55.003272 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:30:55.003301 | orchestrator | Sunday 22 June 2025 19:30:54 +0000 (0:00:03.095) 0:00:28.505 *********** 2025-06-22 19:30:55.994933 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:55.995391 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:55.996473 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:55.999704 | orchestrator | 2025-06-22 19:30:55.999748 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:30:56.000384 | orchestrator | 2025-06-22 19:30:56.001542 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:30:56.003597 | orchestrator | 
Sunday 22 June 2025 19:30:55 +0000 (0:00:01.000) 0:00:29.506 *********** 2025-06-22 19:30:59.561138 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:30:59.561254 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:30:59.561269 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:30:59.561281 | orchestrator | ok: [testbed-manager] 2025-06-22 19:30:59.561352 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:30:59.561803 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:30:59.562191 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:30:59.563766 | orchestrator | 2025-06-22 19:30:59.564673 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:30:59.564728 | orchestrator | 2025-06-22 19:30:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:30:59.564745 | orchestrator | 2025-06-22 19:30:59 | INFO  | Please wait and do not abort execution. 2025-06-22 19:30:59.564982 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:30:59.565267 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:30:59.565625 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:30:59.566074 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:30:59.566916 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:30:59.566943 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:30:59.567166 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:30:59.567186 | orchestrator | 2025-06-22 19:30:59.567570 | orchestrator | 2025-06-22 19:30:59.567853 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:30:59.568142 | orchestrator | Sunday 22 June 2025 19:30:59 +0000 (0:00:03.567) 0:00:33.073 *********** 2025-06-22 19:30:59.568474 | orchestrator | =============================================================================== 2025-06-22 19:30:59.568775 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.80s 2025-06-22 19:30:59.569220 | orchestrator | Install required packages (Debian) -------------------------------------- 6.29s 2025-06-22 19:30:59.569744 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.57s 2025-06-22 19:30:59.569910 | orchestrator | Copy fact files --------------------------------------------------------- 3.10s 2025-06-22 19:30:59.570398 | orchestrator | Create custom facts directory ------------------------------------------- 1.27s 2025-06-22 19:30:59.570528 | orchestrator | Copy fact file ---------------------------------------------------------- 1.07s 2025-06-22 19:30:59.571045 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.00s 2025-06-22 19:30:59.571289 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.93s 2025-06-22 19:30:59.571749 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.92s 2025-06-22 19:30:59.571947 | orchestrator | Create custom facts directory 
------------------------------------------- 0.53s 2025-06-22 19:30:59.572277 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.41s 2025-06-22 19:30:59.572509 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2025-06-22 19:30:59.572935 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s 2025-06-22 19:30:59.573144 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s 2025-06-22 19:30:59.573455 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2025-06-22 19:30:59.573735 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-06-22 19:30:59.574351 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-06-22 19:30:59.574433 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-06-22 19:31:00.186761 | orchestrator | + osism apply bootstrap 2025-06-22 19:31:01.816314 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:31:01.816416 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:31:01.816432 | orchestrator | Registering Redlock._release_script 2025-06-22 19:31:01.871837 | orchestrator | 2025-06-22 19:31:01 | INFO  | Task 88a754a0-e204-40d5-83a3-ea4fed360437 (bootstrap) was prepared for execution. 2025-06-22 19:31:01.871924 | orchestrator | 2025-06-22 19:31:01 | INFO  | It takes a moment until task 88a754a0-e204-40d5-83a3-ea4fed360437 (bootstrap) has been started and output is visible here. 2025-06-22 19:31:05.917161 | orchestrator | 2025-06-22 19:31:05.917270 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:31:05.917286 | orchestrator | 2025-06-22 19:31:05.917298 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:31:05.918499 | orchestrator | Sunday 22 June 2025 19:31:05 +0000 (0:00:00.161) 0:00:00.161 *********** 2025-06-22 19:31:05.991441 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:06.020081 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:06.044879 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:06.071563 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:06.144256 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:06.145098 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:06.145856 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:06.146795 | orchestrator | 2025-06-22 19:31:06.147787 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:31:06.148724 | orchestrator | 2025-06-22 19:31:06.149844 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:31:06.150686 | orchestrator | Sunday 22 June 2025 19:31:06 +0000 (0:00:00.233) 0:00:00.395 *********** 2025-06-22 19:31:09.483214 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:09.483394 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:09.483689 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:09.484456 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:09.485240 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:09.485757 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:09.486698 | orchestrator | ok: [testbed-node-3] 2025-06-22 
19:31:09.487052 | orchestrator | 2025-06-22 19:31:09.487487 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-22 19:31:09.488233 | orchestrator | 2025-06-22 19:31:09.488408 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:31:09.488970 | orchestrator | Sunday 22 June 2025 19:31:09 +0000 (0:00:03.341) 0:00:03.736 *********** 2025-06-22 19:31:09.598642 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:31:09.598950 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-22 19:31:09.602133 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:31:09.622336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:31:09.622681 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:31:09.623423 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-22 19:31:09.625834 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:31:09.666789 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:31:09.667625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:31:09.668901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-22 19:31:09.669337 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 19:31:09.670093 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 19:31:09.670541 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:31:09.670987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:31:09.671412 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 19:31:10.008237 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 19:31:10.009741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-22 19:31:10.010929 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:31:10.012232 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:10.013524 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 19:31:10.014196 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 19:31:10.015399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-22 19:31:10.015905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-22 19:31:10.016792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 19:31:10.017812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-22 19:31:10.018286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-22 19:31:10.019332 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-22 19:31:10.020012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 19:31:10.020776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-22 19:31:10.021692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-22 19:31:10.022419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 19:31:10.022756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 
19:31:10.023497 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:10.024637 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 19:31:10.025304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-22 19:31:10.026154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:31:10.026837 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-22 19:31:10.027503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-22 19:31:10.027761 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:10.028459 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 19:31:10.028986 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-22 19:31:10.029791 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:10.030299 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 19:31:10.031025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:31:10.034426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 19:31:10.034643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-22 19:31:10.035511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:31:10.036002 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:10.036299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 19:31:10.039056 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-22 19:31:10.039533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-22 19:31:10.039704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-22 19:31:10.040658 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:10.041353 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-22 19:31:10.041623 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-22 19:31:10.042896 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:10.043316 | orchestrator | 2025-06-22 19:31:10.043865 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-22 19:31:10.044401 | orchestrator | 2025-06-22 19:31:10.045156 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-22 19:31:10.045267 | orchestrator | Sunday 22 June 2025 19:31:10 +0000 (0:00:00.523) 0:00:04.260 *********** 2025-06-22 19:31:11.133181 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:11.134186 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:11.134982 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:11.135411 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:11.136227 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:11.136900 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:11.137604 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:11.138375 | orchestrator | 2025-06-22 19:31:11.139277 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-22 19:31:11.139774 | orchestrator | Sunday 22 June 2025 19:31:11 +0000 (0:00:01.123) 0:00:05.384 *********** 2025-06-22 19:31:12.289537 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:12.291227 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:12.291487 | orchestrator | ok: [testbed-manager] 
2025-06-22 19:31:12.291929 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:12.292857 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:12.294079 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:12.296602 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:12.296642 | orchestrator | 2025-06-22 19:31:12.296655 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-22 19:31:12.296670 | orchestrator | Sunday 22 June 2025 19:31:12 +0000 (0:00:01.155) 0:00:06.539 *********** 2025-06-22 19:31:12.558921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:12.559666 | orchestrator | 2025-06-22 19:31:12.563478 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-22 19:31:12.563514 | orchestrator | Sunday 22 June 2025 19:31:12 +0000 (0:00:00.270) 0:00:06.810 *********** 2025-06-22 19:31:14.502809 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:14.502918 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:14.503419 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:14.504426 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:14.504597 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:14.505377 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:14.507703 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:14.507838 | orchestrator | 2025-06-22 19:31:14.508346 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-22 19:31:14.509148 | orchestrator | Sunday 22 June 2025 19:31:14 +0000 (0:00:01.941) 0:00:08.752 *********** 2025-06-22 19:31:14.572988 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:14.763073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:14.763804 | orchestrator | 2025-06-22 19:31:14.764858 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-22 19:31:14.767019 | orchestrator | Sunday 22 June 2025 19:31:14 +0000 (0:00:00.262) 0:00:09.014 *********** 2025-06-22 19:31:15.694014 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:15.695716 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:15.695763 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:15.695776 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:15.695839 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:15.696784 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:15.696963 | orchestrator | 2025-06-22 19:31:15.697506 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-22 19:31:15.697762 | orchestrator | Sunday 22 June 2025 19:31:15 +0000 (0:00:00.929) 0:00:09.943 *********** 2025-06-22 19:31:15.748544 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:16.201112 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:16.202274 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:16.202351 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:16.203121 | orchestrator | changed: [testbed-node-2] 
2025-06-22 19:31:16.204252 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:16.205161 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:16.205984 | orchestrator | 2025-06-22 19:31:16.207405 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-22 19:31:16.208244 | orchestrator | Sunday 22 June 2025 19:31:16 +0000 (0:00:00.507) 0:00:10.451 *********** 2025-06-22 19:31:16.299653 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:16.320634 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:16.356838 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:16.621235 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:16.622485 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:16.623755 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:16.624902 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:16.626094 | orchestrator | 2025-06-22 19:31:16.627044 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:31:16.628220 | orchestrator | Sunday 22 June 2025 19:31:16 +0000 (0:00:00.421) 0:00:10.872 *********** 2025-06-22 19:31:16.695164 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:16.722540 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:16.745088 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:16.777423 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:16.841559 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:16.843360 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:16.844404 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:16.845612 | orchestrator | 2025-06-22 19:31:16.846734 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:31:16.847741 | orchestrator | Sunday 22 June 2025 19:31:16 +0000 (0:00:00.221) 0:00:11.093 *********** 2025-06-22 19:31:17.127301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:17.128312 | orchestrator | 2025-06-22 19:31:17.129101 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:31:17.129625 | orchestrator | Sunday 22 June 2025 19:31:17 +0000 (0:00:00.280) 0:00:11.374 *********** 2025-06-22 19:31:17.413423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:17.413630 | orchestrator | 2025-06-22 19:31:17.414428 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:31:17.415284 | orchestrator | Sunday 22 June 2025 19:31:17 +0000 (0:00:00.291) 0:00:11.665 *********** 2025-06-22 19:31:18.598595 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:18.599920 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:18.600521 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:18.601418 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:18.602421 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:18.603087 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:18.603863 
| orchestrator | ok: [testbed-manager] 2025-06-22 19:31:18.604493 | orchestrator | 2025-06-22 19:31:18.605398 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:31:18.606095 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:01.183) 0:00:12.848 *********** 2025-06-22 19:31:18.680106 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:18.700815 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:18.730765 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:18.756833 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:18.808656 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:18.808731 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:18.808960 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:18.808982 | orchestrator | 2025-06-22 19:31:18.809312 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:31:18.809512 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:00.212) 0:00:13.061 *********** 2025-06-22 19:31:19.305119 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:19.306211 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:19.307292 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:19.308207 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:19.309657 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:19.310962 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:19.311757 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:19.312539 | orchestrator | 2025-06-22 19:31:19.312887 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:31:19.313471 | orchestrator | Sunday 22 June 2025 19:31:19 +0000 (0:00:00.495) 0:00:13.556 *********** 2025-06-22 19:31:19.416634 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:19.443549 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:19.471908 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:19.562757 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:19.562924 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:19.563043 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:19.563187 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:19.563718 | orchestrator | 2025-06-22 19:31:19.564307 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:31:19.564698 | orchestrator | Sunday 22 June 2025 19:31:19 +0000 (0:00:00.256) 0:00:13.812 *********** 2025-06-22 19:31:20.105747 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:20.105858 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:20.106389 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:20.107234 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:20.108072 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:20.108745 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:20.109507 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:20.110795 | orchestrator | 2025-06-22 19:31:20.111734 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:31:20.112550 | orchestrator | Sunday 22 June 2025 19:31:20 +0000 (0:00:00.539) 0:00:14.352 *********** 2025-06-22 19:31:21.131372 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:21.131451 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 19:31:21.131961 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:21.133348 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:21.135096 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:21.135870 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:21.136850 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:21.137656 | orchestrator | 2025-06-22 19:31:21.138538 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:31:21.139067 | orchestrator | Sunday 22 June 2025 19:31:21 +0000 (0:00:01.029) 0:00:15.381 *********** 2025-06-22 19:31:22.342218 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:22.342385 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:22.344057 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:22.344925 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:22.345366 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:22.346850 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:22.348050 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:22.348747 | orchestrator | 2025-06-22 19:31:22.349632 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:31:22.350355 | orchestrator | Sunday 22 June 2025 19:31:22 +0000 (0:00:01.208) 0:00:16.589 *********** 2025-06-22 19:31:22.703371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:22.704263 | orchestrator | 2025-06-22 19:31:22.705322 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:31:22.707077 | orchestrator | Sunday 22 June 2025 19:31:22 +0000 (0:00:00.365) 0:00:16.954 *********** 2025-06-22 19:31:22.783528 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:23.890862 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:23.890985 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:23.890998 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:23.891007 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:23.891016 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:23.891025 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:23.891034 | orchestrator | 2025-06-22 19:31:23.891461 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:31:23.892055 | orchestrator | Sunday 22 June 2025 19:31:23 +0000 (0:00:01.180) 0:00:18.135 *********** 2025-06-22 19:31:23.958309 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:23.985552 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:24.010886 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:24.057326 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:24.149418 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:24.150084 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:24.150877 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:24.151566 | orchestrator | 2025-06-22 19:31:24.152335 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:31:24.153013 | orchestrator | Sunday 22 June 2025 19:31:24 +0000 (0:00:00.265) 0:00:18.401 *********** 2025-06-22 19:31:24.230392 | orchestrator | ok: 
[testbed-manager] 2025-06-22 19:31:24.260730 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:24.283771 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:24.313730 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:24.394904 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:24.395447 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:24.396882 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:24.398157 | orchestrator | 2025-06-22 19:31:24.399190 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:31:24.400107 | orchestrator | Sunday 22 June 2025 19:31:24 +0000 (0:00:00.245) 0:00:18.646 *********** 2025-06-22 19:31:24.484748 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:24.516054 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:24.540794 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:24.568055 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:24.648164 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:24.648367 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:24.649163 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:24.649914 | orchestrator | 2025-06-22 19:31:24.650327 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:31:24.651521 | orchestrator | Sunday 22 June 2025 19:31:24 +0000 (0:00:00.254) 0:00:18.900 *********** 2025-06-22 19:31:24.937177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:24.937729 | orchestrator | 2025-06-22 19:31:24.941541 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:31:24.941565 | orchestrator | Sunday 22 June 2025 19:31:24 +0000 (0:00:00.287) 0:00:19.188 *********** 2025-06-22 19:31:25.389368 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:25.389655 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:25.390505 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:25.390976 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:25.391755 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:25.392395 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:25.393231 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:25.393619 | orchestrator | 2025-06-22 19:31:25.395049 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:31:25.396052 | orchestrator | Sunday 22 June 2025 19:31:25 +0000 (0:00:00.453) 0:00:19.641 *********** 2025-06-22 19:31:25.473148 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:25.516158 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:25.553753 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:25.587944 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:25.670708 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:25.672168 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:25.673449 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:25.673627 | orchestrator | 2025-06-22 19:31:25.675366 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:31:25.676108 | orchestrator | Sunday 22 June 2025 19:31:25 +0000 (0:00:00.281) 0:00:19.922 *********** 2025-06-22 19:31:26.640258 | 
orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:26.640725 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:26.641585 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:26.642975 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:26.643874 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:26.644249 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:26.645657 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:26.646524 | orchestrator | 2025-06-22 19:31:26.647424 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:31:26.647767 | orchestrator | Sunday 22 June 2025 19:31:26 +0000 (0:00:00.966) 0:00:20.889 *********** 2025-06-22 19:31:27.176165 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:27.177219 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:27.177795 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:27.178870 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:27.180343 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:27.181315 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:27.182934 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:27.184223 | orchestrator | 2025-06-22 19:31:27.184747 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:31:27.185834 | orchestrator | Sunday 22 June 2025 19:31:27 +0000 (0:00:00.536) 0:00:21.426 *********** 2025-06-22 19:31:28.148596 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:28.150333 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:28.150383 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:28.151261 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:28.151968 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:28.152924 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:28.153621 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:28.154341 | orchestrator | 2025-06-22 19:31:28.155091 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:31:28.156044 | orchestrator | Sunday 22 June 2025 19:31:28 +0000 (0:00:00.972) 0:00:22.398 *********** 2025-06-22 19:31:41.331228 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:41.332799 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:41.333414 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:41.333772 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:41.334537 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:41.337448 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:41.338095 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:41.338726 | orchestrator | 2025-06-22 19:31:41.343069 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-22 19:31:41.344282 | orchestrator | Sunday 22 June 2025 19:31:41 +0000 (0:00:13.180) 0:00:35.579 *********** 2025-06-22 19:31:41.422639 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:41.452355 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:41.480393 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:41.511872 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:41.574798 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:41.574882 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:41.574895 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:41.574907 | orchestrator | 2025-06-22 19:31:41.574919 | orchestrator | TASK 
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-22 19:31:41.575003 | orchestrator | Sunday 22 June 2025 19:31:41 +0000 (0:00:00.247) 0:00:35.826 *********** 2025-06-22 19:31:41.647894 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:41.674748 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:41.696327 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:41.722812 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:41.776433 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:41.776530 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:41.776699 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:41.776798 | orchestrator | 2025-06-22 19:31:41.777258 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-22 19:31:41.777458 | orchestrator | Sunday 22 June 2025 19:31:41 +0000 (0:00:00.202) 0:00:36.029 *********** 2025-06-22 19:31:41.852642 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:41.877896 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:41.897764 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:41.923066 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:41.990008 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:41.991068 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:41.991307 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:41.992725 | orchestrator | 2025-06-22 19:31:41.993121 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-22 19:31:41.996022 | orchestrator | Sunday 22 June 2025 19:31:41 +0000 (0:00:00.213) 0:00:36.242 *********** 2025-06-22 19:31:42.257066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:42.257733 | orchestrator | 2025-06-22 19:31:42.259522 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-22 19:31:42.259549 | orchestrator | Sunday 22 June 2025 19:31:42 +0000 (0:00:00.265) 0:00:36.508 *********** 2025-06-22 19:31:43.563620 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:43.563842 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:43.564924 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:43.567048 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:43.567796 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:43.568732 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:43.569396 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:43.569798 | orchestrator | 2025-06-22 19:31:43.570692 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-22 19:31:43.571179 | orchestrator | Sunday 22 June 2025 19:31:43 +0000 (0:00:01.304) 0:00:37.813 *********** 2025-06-22 19:31:44.498825 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:44.498933 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:44.499317 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:44.504082 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:44.504952 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:44.506754 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:44.506865 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:44.507795 | orchestrator | 2025-06-22 19:31:44.508708 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-22 19:31:44.509428 | orchestrator | Sunday 22 June 2025 19:31:44 +0000 (0:00:00.936) 0:00:38.750 *********** 2025-06-22 19:31:45.255406 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:45.255979 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:45.257667 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:45.258939 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:45.259928 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:45.261194 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:45.262123 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:45.262449 | orchestrator | 2025-06-22 19:31:45.263252 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-22 19:31:45.263981 | orchestrator | Sunday 22 June 2025 19:31:45 +0000 (0:00:00.754) 0:00:39.505 *********** 2025-06-22 19:31:45.548858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:45.549120 | orchestrator | 2025-06-22 19:31:45.549993 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-22 19:31:45.552824 | orchestrator | Sunday 22 June 2025 19:31:45 +0000 (0:00:00.296) 0:00:39.801 *********** 2025-06-22 19:31:46.454365 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:46.456622 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:46.456664 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:46.457261 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:46.459207 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:46.459658 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:46.460952 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:46.461929 | orchestrator | 2025-06-22 19:31:46.462856 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-22 19:31:46.463379 | orchestrator | Sunday 22 June 2025 19:31:46 +0000 (0:00:00.901) 0:00:40.703 *********** 2025-06-22 19:31:46.544055 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:31:46.580790 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:46.614336 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:46.643669 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:46.811231 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:46.811328 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:46.812211 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:46.815601 | orchestrator | 2025-06-22 19:31:46.815628 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-22 19:31:46.815641 | orchestrator | Sunday 22 June 2025 19:31:46 +0000 (0:00:00.358) 0:00:41.061 *********** 2025-06-22 19:31:58.510419 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:58.510646 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:58.510663 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:58.510673 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:58.510682 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:58.511832 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:58.514411 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:31:58.514448 | orchestrator | 2025-06-22 19:31:58.514766 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-22 19:31:58.515432 | orchestrator | Sunday 22 June 2025 19:31:58 +0000 (0:00:11.694) 0:00:52.756 *********** 2025-06-22 19:31:59.660129 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:59.663905 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:59.664734 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:59.665598 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:59.666794 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:59.668049 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:59.669171 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:59.670262 | orchestrator | 2025-06-22 19:31:59.671599 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-22 19:31:59.672089 | orchestrator | Sunday 22 June 2025 19:31:59 +0000 (0:00:01.153) 0:00:53.910 *********** 2025-06-22 19:32:00.475619 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:00.477161 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:00.478141 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:00.479395 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:00.480613 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:00.481832 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:00.482705 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:00.483370 | orchestrator | 2025-06-22 19:32:00.484537 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-22 19:32:00.485416 | orchestrator | Sunday 22 June 2025 19:32:00 +0000 (0:00:00.817) 0:00:54.727 *********** 2025-06-22 19:32:00.572349 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:00.603841 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:00.627861 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:00.654448 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:00.709731 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:00.711023 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:00.712245 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:00.713796 | orchestrator | 2025-06-22 19:32:00.714415 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-22 19:32:00.715463 | orchestrator | Sunday 22 June 2025 19:32:00 +0000 (0:00:00.232) 0:00:54.960 *********** 2025-06-22 19:32:00.812768 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:00.840281 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:00.865418 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:00.934949 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:00.935732 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:00.936839 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:00.936862 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:00.937524 | orchestrator | 2025-06-22 19:32:00.937936 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-22 19:32:00.938439 | orchestrator | Sunday 22 June 2025 19:32:00 +0000 (0:00:00.225) 0:00:55.185 *********** 2025-06-22 19:32:01.228917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-22 19:32:01.229102 | orchestrator | 2025-06-22 19:32:01.229783 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-22 19:32:01.230423 | orchestrator | Sunday 22 June 2025 19:32:01 +0000 (0:00:00.294) 0:00:55.480 *********** 2025-06-22 19:32:02.533997 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:02.536457 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:02.536505 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:02.537114 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:02.538269 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:02.539327 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:02.540109 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:02.541669 | orchestrator | 2025-06-22 19:32:02.543018 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-22 19:32:02.544409 | orchestrator | Sunday 22 June 2025 19:32:02 +0000 (0:00:01.302) 0:00:56.783 *********** 2025-06-22 19:32:03.151052 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:03.151519 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:03.152974 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:03.154549 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:03.154652 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:03.155510 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:03.156396 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:03.157319 | orchestrator | 2025-06-22 19:32:03.158097 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-22 19:32:03.158980 | orchestrator | Sunday 22 June 2025 19:32:03 +0000 (0:00:00.617) 0:00:57.401 *********** 2025-06-22 19:32:03.245651 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:03.274160 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:03.299781 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:03.328861 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:03.389039 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:03.389749 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:03.390693 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:03.391661 | orchestrator | 2025-06-22 19:32:03.393223 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-22 19:32:03.393450 | orchestrator | Sunday 22 June 2025 19:32:03 +0000 (0:00:00.239) 0:00:57.641 *********** 2025-06-22 19:32:04.334281 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:04.336341 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:04.336974 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:04.338055 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:04.339340 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:04.340630 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:04.341261 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:04.342112 | orchestrator | 2025-06-22 19:32:04.342997 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-22 19:32:04.343732 | orchestrator | Sunday 22 June 2025 19:32:04 +0000 (0:00:00.943) 0:00:58.584 *********** 2025-06-22 19:32:05.728098 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:05.728700 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:05.729550 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:05.731101 | 
orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:05.732367 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:05.733684 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:05.734069 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:05.735141 | orchestrator | 2025-06-22 19:32:05.736094 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-22 19:32:05.737030 | orchestrator | Sunday 22 June 2025 19:32:05 +0000 (0:00:01.393) 0:00:59.978 *********** 2025-06-22 19:32:07.475265 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:07.475372 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:07.476440 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:07.477703 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:07.479106 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:07.479963 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:07.481111 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:07.482222 | orchestrator | 2025-06-22 19:32:07.482873 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-22 19:32:07.484488 | orchestrator | Sunday 22 June 2025 19:32:07 +0000 (0:00:01.746) 0:01:01.724 *********** 2025-06-22 19:32:41.032679 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:41.032803 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:41.037391 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:41.037456 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:41.037477 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:41.037523 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:41.037674 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:41.040291 | orchestrator | 2025-06-22 19:32:41.041032 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-22 19:32:41.041774 | orchestrator | Sunday 22 June 2025 19:32:41 +0000 (0:00:33.555) 0:01:35.280 *********** 2025-06-22 19:33:49.341985 | orchestrator | changed: [testbed-manager] 2025-06-22 19:33:49.342145 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:49.342161 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:49.342172 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:49.342182 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:49.342717 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:49.343254 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:49.343908 | orchestrator | 2025-06-22 19:33:49.344832 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-22 19:33:49.345106 | orchestrator | Sunday 22 June 2025 19:33:49 +0000 (0:01:08.307) 0:02:43.587 *********** 2025-06-22 19:33:50.640403 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:50.642191 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:50.642232 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:50.643274 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:50.644448 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:50.645211 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:50.645985 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:50.646855 | orchestrator | 2025-06-22 19:33:50.647119 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-22 19:33:50.647996 | orchestrator | Sunday 22 June 2025 19:33:50 +0000 (0:00:01.293) 0:02:44.881 *********** 2025-06-22 
19:34:02.512218 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:02.512385 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:02.512405 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:02.512415 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:02.512425 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:02.512435 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:02.516004 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:02.516886 | orchestrator | 2025-06-22 19:34:02.517294 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-22 19:34:02.518090 | orchestrator | Sunday 22 June 2025 19:34:02 +0000 (0:00:11.876) 0:02:56.758 *********** 2025-06-22 19:34:02.965230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-22 19:34:02.966540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-22 19:34:02.967536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-22 19:34:02.970461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-22 19:34:02.970493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-22 19:34:02.970506 | orchestrator | 2025-06-22 19:34:02.971973 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-22 19:34:02.972407 | orchestrator | Sunday 22 June 2025 19:34:02 +0000 (0:00:00.458) 0:02:57.216 *********** 2025-06-22 19:34:03.030067 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:34:03.061025 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:03.155360 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:34:03.155461 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:34:03.526452 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:03.527126 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:03.527390 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:34:03.529056 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:03.530668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:34:03.532219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:34:03.532724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:34:03.534068 | orchestrator | 2025-06-22 19:34:03.535311 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-22 19:34:03.536019 | orchestrator | Sunday 22 June 2025 19:34:03 +0000 (0:00:00.560) 0:02:57.776 *********** 2025-06-22 19:34:03.599722 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:34:03.600652 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:34:03.601952 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:34:03.605403 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:34:03.606457 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:34:03.636443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:34:03.636948 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:34:03.638428 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:34:03.639363 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:34:03.640118 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:34:03.666151 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:03.734465 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:34:03.734713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:34:03.734742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:34:03.735034 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:34:03.735173 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:34:07.773273 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:34:07.773350 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:34:07.776713 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:34:07.779274 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:34:07.779294 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:34:07.779299 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:34:07.780206 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:34:07.780529 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:34:07.781420 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:34:07.783845 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:34:07.785090 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:07.786722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:34:07.786904 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:34:07.788211 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:34:07.789078 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:34:07.790155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:34:07.791067 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:07.791394 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:34:07.792070 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:34:07.792776 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:34:07.793772 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:34:07.794053 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:34:07.794184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:34:07.794608 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:34:07.795142 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:34:07.795561 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:34:07.796399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:34:07.796686 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:07.797165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:34:07.797454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:34:07.798156 | orchestrator | changed: [testbed-node-0] 
=> (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:34:07.798611 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:34:07.799251 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:34:07.799318 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:34:07.799651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:34:07.800424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:34:07.800666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:34:07.801008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:34:07.801405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:34:07.801678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:34:07.802147 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:34:07.802452 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:34:07.806252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:34:07.806438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:34:07.806463 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:34:07.806477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:34:07.806490 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:34:07.806504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:34:07.806665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:34:07.806943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:34:07.807239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:34:07.807661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:34:07.807952 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:34:07.808216 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:34:07.808892 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:34:07.809019 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:34:07.809218 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:34:07.809466 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:34:07.809723 | orchestrator | 2025-06-22 19:34:07.810098 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-22 19:34:07.810596 | orchestrator | Sunday 22 June 2025 19:34:07 +0000 (0:00:04.246) 0:03:02.023 *********** 2025-06-22 19:34:08.346276 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.346382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.346396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.346408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.346418 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.347605 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.347704 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:34:08.348401 | orchestrator | 2025-06-22 19:34:08.352072 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-22 19:34:08.352421 | orchestrator | Sunday 22 June 2025 19:34:08 +0000 (0:00:00.568) 0:03:02.592 *********** 2025-06-22 19:34:08.404926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:34:08.432894 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:34:08.435620 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:08.470275 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:34:08.470416 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:08.502430 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:08.502658 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:34:08.532920 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:08.920853 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:34:08.921388 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:34:08.921649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:34:08.922853 | orchestrator | 2025-06-22 19:34:08.925474 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-22 19:34:08.925656 | orchestrator | Sunday 22 June 2025 19:34:08 +0000 (0:00:00.579) 0:03:03.171 *********** 2025-06-22 19:34:08.987266 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:34:09.017594 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:09.017803 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:34:09.018427 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:34:09.045762 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 19:34:09.080083 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:09.080232 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:34:09.104701 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:10.545005 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:34:10.547467 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:34:10.547600 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:34:10.548875 | orchestrator | 2025-06-22 19:34:10.549945 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-22 19:34:10.551128 | orchestrator | Sunday 22 June 2025 19:34:10 +0000 (0:00:01.622) 0:03:04.794 *********** 2025-06-22 19:34:10.629505 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:10.663464 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:10.691216 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:10.718403 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:10.867174 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:10.870826 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:10.870871 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:10.870883 | orchestrator | 2025-06-22 19:34:10.870896 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-22 19:34:10.870909 | orchestrator | Sunday 22 June 2025 19:34:10 +0000 (0:00:00.322) 0:03:05.117 *********** 2025-06-22 19:34:16.436984 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:16.438234 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:16.438314 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:16.439592 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:16.440769 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:16.441755 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:16.442434 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:16.443372 | orchestrator | 2025-06-22 19:34:16.444247 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-22 19:34:16.444665 | orchestrator | Sunday 22 June 2025 19:34:16 +0000 (0:00:05.570) 0:03:10.687 *********** 2025-06-22 19:34:16.527293 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-22 19:34:16.527470 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-22 19:34:16.573883 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:16.574070 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-22 19:34:16.612190 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:16.612435 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-22 19:34:16.652929 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:16.653066 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-22 19:34:16.692025 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:16.769608 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:16.770771 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-22 19:34:16.771748 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:16.773218 | orchestrator | skipping: [testbed-node-5] => 
(item=nscd)  2025-06-22 19:34:16.773354 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:16.774540 | orchestrator | 2025-06-22 19:34:16.775325 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-22 19:34:16.776094 | orchestrator | Sunday 22 June 2025 19:34:16 +0000 (0:00:00.333) 0:03:11.021 *********** 2025-06-22 19:34:17.795179 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-22 19:34:17.796031 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-22 19:34:17.796706 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-22 19:34:17.800488 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-22 19:34:17.801218 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-22 19:34:17.801961 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-22 19:34:17.803140 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-22 19:34:17.805476 | orchestrator | 2025-06-22 19:34:17.806476 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-22 19:34:17.807125 | orchestrator | Sunday 22 June 2025 19:34:17 +0000 (0:00:01.023) 0:03:12.045 *********** 2025-06-22 19:34:18.336318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:18.338653 | orchestrator | 2025-06-22 19:34:18.338690 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-22 19:34:18.338703 | orchestrator | Sunday 22 June 2025 19:34:18 +0000 (0:00:00.540) 0:03:12.585 *********** 2025-06-22 19:34:19.371286 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:19.371445 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:19.371535 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:19.373232 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:19.373266 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:19.373278 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:19.373289 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:19.373301 | orchestrator | 2025-06-22 19:34:19.374194 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-22 19:34:19.374893 | orchestrator | Sunday 22 June 2025 19:34:19 +0000 (0:00:01.036) 0:03:13.622 *********** 2025-06-22 19:34:19.929888 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:19.930246 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:19.931096 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:19.931987 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:19.932892 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:19.933820 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:19.934498 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:19.935720 | orchestrator | 2025-06-22 19:34:19.936654 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-22 19:34:19.936911 | orchestrator | Sunday 22 June 2025 19:34:19 +0000 (0:00:00.557) 0:03:14.179 *********** 2025-06-22 19:34:20.514153 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:20.515759 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:20.516226 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:20.518055 | orchestrator | changed: [testbed-node-2] 
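
Editor's note on the osism.commons.repository tasks logged above ("Remove sources.list file", "Copy ubuntu.sources file", "Update package cache"): on Ubuntu 24.04 the hosts are switched to the deb822 source format, so the legacy /etc/apt/sources.list is retired in favour of /etc/apt/sources.list.d/ubuntu.sources, which is also why the "Include tasks for Ubuntu < 24.04" step is skipped on every host. The sketch below is a minimal hand-rolled equivalent, not the role's actual template; the mirror URL, keyring path and file mode are assumptions, and the log only shows "ok" for the sources.list step, so the role may empty that file rather than delete it.

  - name: Copy ubuntu.sources file (sketch, assumed mirror and keyring)
    ansible.builtin.copy:
      dest: /etc/apt/sources.list.d/ubuntu.sources
      mode: "0644"
      content: |
        Types: deb
        URIs: http://archive.ubuntu.com/ubuntu
        Suites: noble noble-updates noble-backports noble-security
        Components: main restricted universe multiverse
        Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

  - name: Remove legacy sources.list file (sketch)
    ansible.builtin.file:
      path: /etc/apt/sources.list
      state: absent

  - name: Update package cache
    ansible.builtin.apt:
      update_cache: true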
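The osism.services.rsyslog step "Forward syslog message to local fluentd daemon" further above amounts to dropping a single forwarding rule into /etc/rsyslog.d/ and restarting rsyslog. A sketch under assumptions: the drop-in file name, the fluentd endpoint 127.0.0.1:5140 and the UDP transport are guesses for illustration, not values read from the role or this log.

  - name: Forward syslog messages to a local fluentd daemon (sketch)
    ansible.builtin.copy:
      dest: /etc/rsyslog.d/10-fluentd.conf
      mode: "0644"
      content: |
        *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

  - name: Restart rsyslog to pick up the forwarding rule
    ansible.builtin.service:
      name: rsyslog
      state: restarted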
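The osism.commons.sysctl items applied above (vm.max_map_count for the search stack, the net.ipv4/net.core RabbitMQ tuning, vm.swappiness, nf_conntrack_max and the inotify limit) map onto plain ansible.posix.sysctl calls scoped by group membership. A minimal sketch, assuming group names and a /etc/sysctl.d drop-in path that are not visible in this log:

  - name: Set sysctl parameters on generic (sketch)
    ansible.posix.sysctl:
      name: vm.swappiness
      value: "1"
      state: present
      sysctl_set: true
      reload: true
      sysctl_file: /etc/sysctl.d/99-osism.conf

  - name: Set sysctl parameters on compute (sketch)
    ansible.posix.sysctl:
      name: net.netfilter.nf_conntrack_max
      value: "1048576"
      state: present
      sysctl_set: true
      reload: true
      sysctl_file: /etc/sysctl.d/99-osism.conf
    when: "'compute' in group_names"

In the run above the compute and k3s_node items only report "changed" on testbed-node-3/4/5 while the other hosts skip them, which is consistent with a group_names condition of this kind.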
2025-06-22 19:34:20.519034 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:20.519972 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:20.520802 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:20.521747 | orchestrator | 2025-06-22 19:34:20.522242 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-22 19:34:20.523001 | orchestrator | Sunday 22 June 2025 19:34:20 +0000 (0:00:00.585) 0:03:14.765 *********** 2025-06-22 19:34:21.057997 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:21.058235 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:21.058256 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:21.059809 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:21.060128 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:21.061075 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:21.061671 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:21.063289 | orchestrator | 2025-06-22 19:34:21.063391 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-22 19:34:21.064684 | orchestrator | Sunday 22 June 2025 19:34:21 +0000 (0:00:00.543) 0:03:15.308 *********** 2025-06-22 19:34:21.929097 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619382.132344, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.929701 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619383.3228962, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.930734 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619535.0044448, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.931654 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619370.9402368, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.932335 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619467.6764016, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.934979 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619331.3836672, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935114 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619377.5008063, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935131 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619286.516531, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935143 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619369.358356, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935154 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619286.9656384, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935312 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619440.7027705, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935331 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619353.588149, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.935749 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619281.8860738, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.936094 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619273.8826187, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:34:21.936159 | orchestrator | 2025-06-22 19:34:21.936174 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-22 19:34:21.936288 | orchestrator | Sunday 22 June 2025 19:34:21 +0000 (0:00:00.872) 0:03:16.181 *********** 2025-06-22 19:34:22.977991 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:22.978886 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:22.979748 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:22.982407 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:22.982665 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:22.984027 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:22.985717 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:22.986605 | orchestrator | 2025-06-22 19:34:22.987340 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-22 19:34:22.987708 | orchestrator | Sunday 22 June 2025 19:34:22 +0000 (0:00:01.048) 0:03:17.229 *********** 2025-06-22 19:34:24.063658 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:34:24.064243 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:24.065136 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:24.065893 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:24.066321 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:24.067033 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:24.067564 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:24.068120 | orchestrator | 2025-06-22 19:34:24.068923 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-22 19:34:24.069772 | orchestrator | Sunday 22 June 2025 19:34:24 +0000 (0:00:01.084) 0:03:18.313 *********** 2025-06-22 19:34:25.140813 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:25.140917 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:25.141027 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:25.141045 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:25.141099 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:25.141398 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:25.142081 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:25.142103 | orchestrator | 2025-06-22 19:34:25.142400 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-22 19:34:25.145343 | orchestrator | Sunday 22 June 2025 19:34:25 +0000 (0:00:01.077) 0:03:19.390 *********** 2025-06-22 19:34:25.213254 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:25.262004 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:25.307529 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:25.357996 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:25.396774 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:25.461914 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:25.462274 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:25.463312 | orchestrator | 2025-06-22 19:34:25.464223 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-22 19:34:25.464582 | orchestrator | Sunday 22 June 2025 19:34:25 +0000 (0:00:00.322) 0:03:19.713 *********** 2025-06-22 19:34:26.138813 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:26.139920 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:26.140902 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:26.142609 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:26.143239 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:26.143258 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:26.143748 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:26.144388 | orchestrator | 2025-06-22 19:34:26.144404 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-22 19:34:26.145221 | orchestrator | Sunday 22 June 2025 19:34:26 +0000 (0:00:00.675) 0:03:20.388 *********** 2025-06-22 19:34:26.550276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:26.550729 | orchestrator | 2025-06-22 19:34:26.552042 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-22 19:34:26.556293 | orchestrator | Sunday 22 June 2025 19:34:26 +0000 
(0:00:00.412) 0:03:20.801 *********** 2025-06-22 19:34:33.064513 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:33.064738 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:33.066211 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:33.068400 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:33.069342 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:33.070006 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:33.071220 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:33.071575 | orchestrator | 2025-06-22 19:34:33.072313 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-22 19:34:33.073067 | orchestrator | Sunday 22 June 2025 19:34:33 +0000 (0:00:06.513) 0:03:27.314 *********** 2025-06-22 19:34:34.075988 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:34.076130 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:34.076353 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:34.078979 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:34.079883 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:34.080874 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:34.081651 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:34.082389 | orchestrator | 2025-06-22 19:34:34.083246 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-22 19:34:34.083934 | orchestrator | Sunday 22 June 2025 19:34:34 +0000 (0:00:01.012) 0:03:28.327 *********** 2025-06-22 19:34:35.060329 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:35.061254 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:35.061996 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:35.063365 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:35.064376 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:35.065804 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:35.066438 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:35.067245 | orchestrator | 2025-06-22 19:34:35.068095 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-22 19:34:35.068922 | orchestrator | Sunday 22 June 2025 19:34:35 +0000 (0:00:00.984) 0:03:29.311 *********** 2025-06-22 19:34:35.616024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:35.618183 | orchestrator | 2025-06-22 19:34:35.620004 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-22 19:34:35.621437 | orchestrator | Sunday 22 June 2025 19:34:35 +0000 (0:00:00.551) 0:03:29.863 *********** 2025-06-22 19:34:43.987819 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:43.987939 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:43.988014 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:43.988841 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:43.990299 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:43.992154 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:43.992502 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:43.993410 | orchestrator | 2025-06-22 19:34:43.994122 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-22 19:34:43.994715 | 
orchestrator | Sunday 22 June 2025 19:34:43 +0000 (0:00:08.370) 0:03:38.233 *********** 2025-06-22 19:34:44.606482 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:44.607612 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:44.608269 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:44.610102 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:44.613276 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:44.614877 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:44.615787 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:44.617145 | orchestrator | 2025-06-22 19:34:44.618223 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-22 19:34:44.619438 | orchestrator | Sunday 22 June 2025 19:34:44 +0000 (0:00:00.621) 0:03:38.855 *********** 2025-06-22 19:34:45.661845 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:45.662077 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:45.663058 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:45.663838 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:45.664891 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:45.665612 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:45.666904 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:45.666984 | orchestrator | 2025-06-22 19:34:45.667697 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-22 19:34:45.668263 | orchestrator | Sunday 22 June 2025 19:34:45 +0000 (0:00:01.056) 0:03:39.911 *********** 2025-06-22 19:34:46.613894 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:46.614397 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:46.617701 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:46.617731 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:46.617743 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:46.617754 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:46.617765 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:46.618440 | orchestrator | 2025-06-22 19:34:46.619135 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-22 19:34:46.622438 | orchestrator | Sunday 22 June 2025 19:34:46 +0000 (0:00:00.952) 0:03:40.864 *********** 2025-06-22 19:34:46.721945 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:46.767343 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:46.800874 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:46.835078 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:46.893487 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:46.893860 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:46.895849 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:46.895901 | orchestrator | 2025-06-22 19:34:46.896810 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-22 19:34:46.897436 | orchestrator | Sunday 22 June 2025 19:34:46 +0000 (0:00:00.281) 0:03:41.146 *********** 2025-06-22 19:34:46.978911 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:47.014867 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:47.094325 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:47.125981 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:47.208183 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:47.209079 | orchestrator | ok: 
[testbed-node-4] 2025-06-22 19:34:47.210073 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:47.210181 | orchestrator | 2025-06-22 19:34:47.210292 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-22 19:34:47.210729 | orchestrator | Sunday 22 June 2025 19:34:47 +0000 (0:00:00.315) 0:03:41.461 *********** 2025-06-22 19:34:47.309357 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:47.344890 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:47.374957 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:47.428044 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:47.507062 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:47.507186 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:47.507418 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:47.508746 | orchestrator | 2025-06-22 19:34:47.509240 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-22 19:34:47.510619 | orchestrator | Sunday 22 June 2025 19:34:47 +0000 (0:00:00.294) 0:03:41.755 *********** 2025-06-22 19:34:52.798907 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:52.799209 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:52.799926 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:52.799957 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:52.800243 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:52.800847 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:52.801176 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:52.802192 | orchestrator | 2025-06-22 19:34:52.802219 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-22 19:34:52.802778 | orchestrator | Sunday 22 June 2025 19:34:52 +0000 (0:00:05.293) 0:03:47.048 *********** 2025-06-22 19:34:53.230741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:53.231645 | orchestrator | 2025-06-22 19:34:53.233513 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-22 19:34:53.234885 | orchestrator | Sunday 22 June 2025 19:34:53 +0000 (0:00:00.432) 0:03:47.481 *********** 2025-06-22 19:34:53.308054 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.308995 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-22 19:34:53.310342 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.381143 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-22 19:34:53.383069 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:53.384311 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.385172 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-22 19:34:53.424622 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:53.476260 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:53.477419 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.477446 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-22 19:34:53.478148 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.479140 | 
orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-22 19:34:53.508530 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:53.585015 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:53.585539 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.587106 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-22 19:34:53.587750 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:53.587856 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-22 19:34:53.588448 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-22 19:34:53.588857 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:53.589679 | orchestrator | 2025-06-22 19:34:53.589771 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-22 19:34:53.592344 | orchestrator | Sunday 22 June 2025 19:34:53 +0000 (0:00:00.355) 0:03:47.837 *********** 2025-06-22 19:34:53.982190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:53.982309 | orchestrator | 2025-06-22 19:34:53.982796 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-22 19:34:53.983394 | orchestrator | Sunday 22 June 2025 19:34:53 +0000 (0:00:00.396) 0:03:48.233 *********** 2025-06-22 19:34:54.080628 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-22 19:34:54.080734 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-22 19:34:54.116051 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:54.154516 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-22 19:34:54.155620 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:54.156166 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-22 19:34:54.193070 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:54.247966 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:54.248053 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-22 19:34:54.248067 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-22 19:34:54.336853 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:54.337041 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:54.337119 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-22 19:34:54.337778 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:54.338146 | orchestrator | 2025-06-22 19:34:54.338538 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-22 19:34:54.339173 | orchestrator | Sunday 22 June 2025 19:34:54 +0000 (0:00:00.353) 0:03:48.587 *********** 2025-06-22 19:34:54.868398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:34:54.869362 | orchestrator | 2025-06-22 19:34:54.869629 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-06-22 
19:34:54.870174 | orchestrator | Sunday 22 June 2025 19:34:54 +0000 (0:00:00.532) 0:03:49.119 *********** 2025-06-22 19:35:25.965810 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:25.965972 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:25.965988 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:25.965999 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:25.966010 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:25.966083 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:25.966095 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:25.966185 | orchestrator | 2025-06-22 19:35:25.966932 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-22 19:35:25.967382 | orchestrator | Sunday 22 June 2025 19:35:25 +0000 (0:00:31.089) 0:04:20.209 *********** 2025-06-22 19:35:32.907499 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:32.907692 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:32.912466 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:32.912534 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:32.912605 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:32.912623 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:32.912643 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:32.913512 | orchestrator | 2025-06-22 19:35:32.915211 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-06-22 19:35:32.915267 | orchestrator | Sunday 22 June 2025 19:35:32 +0000 (0:00:06.947) 0:04:27.156 *********** 2025-06-22 19:35:39.639649 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:39.639760 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:39.640382 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:39.641069 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:39.645748 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:39.646177 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:39.646994 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:39.647931 | orchestrator | 2025-06-22 19:35:39.648722 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-22 19:35:39.649661 | orchestrator | Sunday 22 June 2025 19:35:39 +0000 (0:00:06.730) 0:04:33.886 *********** 2025-06-22 19:35:41.090982 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:41.097076 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:41.097123 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:41.097169 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:41.097181 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:41.098431 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:41.099607 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:41.100283 | orchestrator | 2025-06-22 19:35:41.100769 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-22 19:35:41.101461 | orchestrator | Sunday 22 June 2025 19:35:41 +0000 (0:00:01.453) 0:04:35.340 *********** 2025-06-22 19:35:46.098789 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:46.098855 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:46.099397 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:46.099904 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:46.102203 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:35:46.103397 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:46.105005 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:46.105060 | orchestrator | 2025-06-22 19:35:46.105447 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-22 19:35:46.105944 | orchestrator | Sunday 22 June 2025 19:35:46 +0000 (0:00:05.007) 0:04:40.348 *********** 2025-06-22 19:35:46.482603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:35:46.483755 | orchestrator | 2025-06-22 19:35:46.485149 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-22 19:35:46.486162 | orchestrator | Sunday 22 June 2025 19:35:46 +0000 (0:00:00.385) 0:04:40.733 *********** 2025-06-22 19:35:47.159838 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:47.162629 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:47.165032 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:47.165885 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:47.167107 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:47.168217 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:47.169819 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:47.170824 | orchestrator | 2025-06-22 19:35:47.172117 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-22 19:35:47.172605 | orchestrator | Sunday 22 June 2025 19:35:47 +0000 (0:00:00.676) 0:04:41.409 *********** 2025-06-22 19:35:48.587150 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:48.589024 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:48.589735 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:48.590181 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:48.591256 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:48.591682 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:48.592526 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:48.592996 | orchestrator | 2025-06-22 19:35:48.593493 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-22 19:35:48.594280 | orchestrator | Sunday 22 June 2025 19:35:48 +0000 (0:00:01.426) 0:04:42.836 *********** 2025-06-22 19:35:49.270004 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:49.270820 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:49.271768 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:49.272824 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:49.273996 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:49.274697 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:49.275481 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:49.276878 | orchestrator | 2025-06-22 19:35:49.278222 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-22 19:35:49.278994 | orchestrator | Sunday 22 June 2025 19:35:49 +0000 (0:00:00.684) 0:04:43.521 *********** 2025-06-22 19:35:49.331486 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:49.376838 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:49.408945 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:49.440191 | orchestrator | skipping: [testbed-node-2] 
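A minimal sketch of the timezone step reported above (install tzdata, then set the zone to UTC), assuming the stock apt and timezone modules; the actual tasks of the osism.commons.timezone role may differ:

    # Illustrative sketch only, not taken from the role.
    - name: Install tzdata package
      ansible.builtin.apt:
        name: tzdata
        state: present

    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC   # matches the "Set timezone to UTC" task reported in this run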
2025-06-22 19:35:49.473290 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:49.533479 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:49.533935 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:49.535179 | orchestrator | 2025-06-22 19:35:49.535695 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-06-22 19:35:49.536185 | orchestrator | Sunday 22 June 2025 19:35:49 +0000 (0:00:00.264) 0:04:43.785 *********** 2025-06-22 19:35:49.609605 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:49.639405 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:49.667373 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:49.700430 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:49.891634 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:49.893160 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:49.896260 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:49.896325 | orchestrator | 2025-06-22 19:35:49.896339 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-22 19:35:49.896850 | orchestrator | Sunday 22 June 2025 19:35:49 +0000 (0:00:00.357) 0:04:44.143 *********** 2025-06-22 19:35:49.997835 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:50.030769 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:50.069441 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:50.118471 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:50.175669 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:50.176458 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:50.177366 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:50.178582 | orchestrator | 2025-06-22 19:35:50.180605 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-22 19:35:50.181205 | orchestrator | Sunday 22 June 2025 19:35:50 +0000 (0:00:00.284) 0:04:44.427 *********** 2025-06-22 19:35:50.272164 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:50.305981 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:50.345020 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:50.375597 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:50.424815 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:50.425915 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:50.427246 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:50.427986 | orchestrator | 2025-06-22 19:35:50.428636 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-22 19:35:50.429987 | orchestrator | Sunday 22 June 2025 19:35:50 +0000 (0:00:00.249) 0:04:44.677 *********** 2025-06-22 19:35:50.518848 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:50.566849 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:50.600335 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:50.639438 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:50.704067 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:50.704251 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:50.705359 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:50.706257 | orchestrator | 2025-06-22 19:35:50.706663 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-22 19:35:50.707361 | orchestrator | Sunday 22 June 2025 19:35:50 +0000 (0:00:00.278) 0:04:44.956 *********** 
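A rough sketch of the docker_version / docker_cli_version defaulting and the version print-out above, assuming plain set_fact and debug tasks (the real osism.services.docker role may implement this differently; 5:27.5.1 is simply the value this run reports):

    # Illustrative sketch only.
    - name: Set docker_cli_version variable to default value
      ansible.builtin.set_fact:
        docker_cli_version: "{{ docker_version }}"   # assumption: CLI version follows the engine version
      when: docker_cli_version is not defined

    - name: Print used docker version
      ansible.builtin.debug:
        var: docker_version   # yields output like "docker_version: 5:27.5.1"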
2025-06-22 19:35:50.805229 | orchestrator | ok: [testbed-manager] =>  2025-06-22 19:35:50.805900 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.843264 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:35:50.843411 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.870591 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:35:50.871381 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.905010 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:35:50.905159 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.981086 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:35:50.981797 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.983447 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:35:50.984763 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.985760 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:35:50.986826 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:35:50.987846 | orchestrator | 2025-06-22 19:35:50.988682 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-22 19:35:50.989522 | orchestrator | Sunday 22 June 2025 19:35:50 +0000 (0:00:00.276) 0:04:45.232 *********** 2025-06-22 19:35:51.192975 | orchestrator | ok: [testbed-manager] =>  2025-06-22 19:35:51.193706 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.232629 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:35:51.232802 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.271180 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:35:51.271389 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.308940 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:35:51.310114 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.378463 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:35:51.379370 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.380468 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:35:51.381609 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.382060 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:35:51.382877 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:35:51.383496 | orchestrator | 2025-06-22 19:35:51.384033 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-22 19:35:51.384533 | orchestrator | Sunday 22 June 2025 19:35:51 +0000 (0:00:00.397) 0:04:45.630 *********** 2025-06-22 19:35:51.481816 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:51.511815 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:51.548170 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:51.581907 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:51.644761 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:51.645810 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:51.649509 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:51.649580 | orchestrator | 2025-06-22 19:35:51.649604 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-22 19:35:51.649625 | orchestrator | Sunday 22 June 2025 19:35:51 +0000 (0:00:00.267) 0:04:45.897 *********** 2025-06-22 19:35:51.737928 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:51.776664 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:51.814208 | orchestrator | skipping: [testbed-node-1] 
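Later in this play the docker and docker-cli packages are pinned to the version printed above and the containerd package is locked; assuming an apt preferences file and a dpkg hold are used, a generic sketch looks like this (file path and package names are illustrative, not taken from the role):

    # Illustrative sketch only.
    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce   # assumed path
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version 5:27.5.1*
          Pin-Priority: 1001

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io   # assumed package name from the Docker repository
        selection: hold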
2025-06-22 19:35:51.852596 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:51.890069 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:51.951816 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:51.952134 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:51.952910 | orchestrator | 2025-06-22 19:35:51.953609 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-22 19:35:51.954101 | orchestrator | Sunday 22 June 2025 19:35:51 +0000 (0:00:00.307) 0:04:46.204 *********** 2025-06-22 19:35:52.406363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:35:52.408583 | orchestrator | 2025-06-22 19:35:52.408618 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-22 19:35:52.408633 | orchestrator | Sunday 22 June 2025 19:35:52 +0000 (0:00:00.450) 0:04:46.655 *********** 2025-06-22 19:35:53.190159 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:53.190319 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:53.191389 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:53.191709 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:53.193048 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:53.193228 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:53.193786 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:53.194705 | orchestrator | 2025-06-22 19:35:53.195046 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-22 19:35:53.195602 | orchestrator | Sunday 22 June 2025 19:35:53 +0000 (0:00:00.783) 0:04:47.439 *********** 2025-06-22 19:35:55.812412 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:55.812654 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:55.813180 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:55.813727 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:55.813976 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:55.814577 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:55.815533 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:55.815796 | orchestrator | 2025-06-22 19:35:55.815999 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-22 19:35:55.816455 | orchestrator | Sunday 22 June 2025 19:35:55 +0000 (0:00:02.624) 0:04:50.064 *********** 2025-06-22 19:35:55.887207 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-22 19:35:55.887358 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-22 19:35:55.888114 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-22 19:35:55.955921 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:55.956440 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-22 19:35:55.956855 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-22 19:35:56.034810 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-22 19:35:56.035003 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-22 19:35:56.035820 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-22 19:35:56.039262 | orchestrator | skipping: [testbed-node-1] => 
(item=docker-engine)  2025-06-22 19:35:56.108507 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:56.108652 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-22 19:35:56.109375 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-22 19:35:56.110247 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-22 19:35:56.332346 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:56.332757 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-22 19:35:56.333485 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-22 19:35:56.334445 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-22 19:35:56.401667 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:56.401837 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-22 19:35:56.402498 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-22 19:35:56.527594 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:56.528381 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-22 19:35:56.529234 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:56.529899 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-22 19:35:56.530363 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-22 19:35:56.531135 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-22 19:35:56.531531 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:56.532471 | orchestrator | 2025-06-22 19:35:56.532760 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-22 19:35:56.533276 | orchestrator | Sunday 22 June 2025 19:35:56 +0000 (0:00:00.713) 0:04:50.777 *********** 2025-06-22 19:36:02.066379 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:02.067178 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:02.068841 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:02.069976 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:02.071290 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:02.071989 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:02.073301 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:02.074299 | orchestrator | 2025-06-22 19:36:02.075099 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-22 19:36:02.076042 | orchestrator | Sunday 22 June 2025 19:36:02 +0000 (0:00:05.539) 0:04:56.316 *********** 2025-06-22 19:36:03.057233 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:03.057476 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:03.057778 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:03.059838 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:03.060643 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:03.061486 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:03.062653 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:03.063347 | orchestrator | 2025-06-22 19:36:03.064174 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-22 19:36:03.064697 | orchestrator | Sunday 22 June 2025 19:36:03 +0000 (0:00:00.989) 0:04:57.306 *********** 2025-06-22 19:36:09.773903 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:09.774541 | orchestrator | changed: [testbed-node-1] 2025-06-22 
19:36:09.775776 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:09.778221 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:09.779736 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:09.780608 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:09.782063 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:09.782996 | orchestrator | 2025-06-22 19:36:09.783875 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-22 19:36:09.784784 | orchestrator | Sunday 22 June 2025 19:36:09 +0000 (0:00:06.716) 0:05:04.022 *********** 2025-06-22 19:36:12.451948 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:12.453849 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:12.454264 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:12.456828 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:12.456982 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:12.458636 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:12.459486 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:12.460657 | orchestrator | 2025-06-22 19:36:12.461598 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-22 19:36:12.462367 | orchestrator | Sunday 22 June 2025 19:36:12 +0000 (0:00:02.679) 0:05:06.702 *********** 2025-06-22 19:36:13.887939 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:13.888046 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:13.888278 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:13.890680 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:13.890787 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:13.890813 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:13.891302 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:13.891866 | orchestrator | 2025-06-22 19:36:13.892636 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-22 19:36:13.893317 | orchestrator | Sunday 22 June 2025 19:36:13 +0000 (0:00:01.434) 0:05:08.136 *********** 2025-06-22 19:36:15.077329 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:15.079587 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:15.079628 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:15.079639 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:15.080857 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:15.082051 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:15.082658 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:15.083457 | orchestrator | 2025-06-22 19:36:15.083896 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-22 19:36:15.084634 | orchestrator | Sunday 22 June 2025 19:36:15 +0000 (0:00:01.190) 0:05:09.326 *********** 2025-06-22 19:36:15.278693 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:15.349472 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:15.413974 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:15.480904 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:15.653101 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:15.653513 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:15.656472 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:15.656677 | orchestrator | 2025-06-22 19:36:15.658589 | orchestrator | TASK 
[osism.services.docker : Install containerd package] ********************** 2025-06-22 19:36:15.659490 | orchestrator | Sunday 22 June 2025 19:36:15 +0000 (0:00:00.575) 0:05:09.902 *********** 2025-06-22 19:36:24.692307 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:24.693481 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:24.694317 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:24.696086 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:24.697248 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:24.697336 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:24.698199 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:24.698652 | orchestrator | 2025-06-22 19:36:24.699349 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-22 19:36:24.700341 | orchestrator | Sunday 22 June 2025 19:36:24 +0000 (0:00:09.038) 0:05:18.941 *********** 2025-06-22 19:36:25.607378 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:25.608179 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:25.608979 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:25.609045 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:25.609894 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:25.610801 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:25.611164 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:25.611534 | orchestrator | 2025-06-22 19:36:25.611994 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-22 19:36:25.612653 | orchestrator | Sunday 22 June 2025 19:36:25 +0000 (0:00:00.916) 0:05:19.858 *********** 2025-06-22 19:36:33.740728 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:33.741488 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:33.743137 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:33.745175 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:33.745916 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:33.747201 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:33.748241 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:33.748758 | orchestrator | 2025-06-22 19:36:33.749637 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-22 19:36:33.750629 | orchestrator | Sunday 22 June 2025 19:36:33 +0000 (0:00:08.133) 0:05:27.991 *********** 2025-06-22 19:36:43.940353 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:43.940817 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:43.942726 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:43.943726 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:43.944770 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:43.945748 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:43.946715 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:43.947402 | orchestrator | 2025-06-22 19:36:43.948009 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-22 19:36:43.948810 | orchestrator | Sunday 22 June 2025 19:36:43 +0000 (0:00:10.195) 0:05:38.187 *********** 2025-06-22 19:36:44.352870 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-22 19:36:45.148399 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-22 19:36:45.148615 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker) 2025-06-22 19:36:45.148809 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-22 19:36:45.149692 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-22 19:36:45.150192 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-22 19:36:45.150962 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-22 19:36:45.151766 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-22 19:36:45.152986 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-22 19:36:45.153784 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-22 19:36:45.154756 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-22 19:36:45.155429 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-22 19:36:45.156359 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-22 19:36:45.157056 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-22 19:36:45.157843 | orchestrator | 2025-06-22 19:36:45.158581 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-22 19:36:45.159076 | orchestrator | Sunday 22 June 2025 19:36:45 +0000 (0:00:01.210) 0:05:39.397 *********** 2025-06-22 19:36:45.280102 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:45.343249 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:45.415139 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:45.476425 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:45.538696 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:45.654070 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:45.654273 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:45.654641 | orchestrator | 2025-06-22 19:36:45.655657 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-22 19:36:45.655905 | orchestrator | Sunday 22 June 2025 19:36:45 +0000 (0:00:00.509) 0:05:39.906 *********** 2025-06-22 19:36:49.408659 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:49.409197 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:49.410147 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:49.411897 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:49.413246 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:49.414226 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:49.414876 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:49.415380 | orchestrator | 2025-06-22 19:36:49.416146 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-22 19:36:49.416818 | orchestrator | Sunday 22 June 2025 19:36:49 +0000 (0:00:03.751) 0:05:43.658 *********** 2025-06-22 19:36:49.547873 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:49.609858 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:49.672336 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:49.741854 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:49.806252 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:49.909476 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:49.910395 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:49.911469 | orchestrator | 2025-06-22 19:36:49.912539 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings 
from pip)] *** 2025-06-22 19:36:49.914161 | orchestrator | Sunday 22 June 2025 19:36:49 +0000 (0:00:00.501) 0:05:44.160 *********** 2025-06-22 19:36:49.986236 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-22 19:36:49.986626 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-22 19:36:50.059078 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:50.060273 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-22 19:36:50.061317 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-22 19:36:50.128408 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:50.129498 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-22 19:36:50.130642 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-22 19:36:50.205065 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:50.206137 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-22 19:36:50.207234 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-22 19:36:50.275013 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:50.276407 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-22 19:36:50.280099 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-22 19:36:50.343312 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:50.345298 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-22 19:36:50.346165 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-22 19:36:50.464620 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:50.468284 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-22 19:36:50.468314 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-22 19:36:50.468792 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:50.469753 | orchestrator | 2025-06-22 19:36:50.470847 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-22 19:36:50.471897 | orchestrator | Sunday 22 June 2025 19:36:50 +0000 (0:00:00.555) 0:05:44.715 *********** 2025-06-22 19:36:50.590826 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:50.660333 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:50.722894 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:50.785642 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:50.852887 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:50.956914 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:50.957931 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:50.958593 | orchestrator | 2025-06-22 19:36:50.962219 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-22 19:36:50.962321 | orchestrator | Sunday 22 June 2025 19:36:50 +0000 (0:00:00.492) 0:05:45.207 *********** 2025-06-22 19:36:51.096796 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:51.158921 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:51.221506 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:51.289085 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:51.352949 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:51.442477 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:51.443168 | orchestrator | 
skipping: [testbed-node-5] 2025-06-22 19:36:51.449174 | orchestrator | 2025-06-22 19:36:51.449212 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-22 19:36:51.449227 | orchestrator | Sunday 22 June 2025 19:36:51 +0000 (0:00:00.484) 0:05:45.692 *********** 2025-06-22 19:36:51.581308 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:51.647474 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:51.714969 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:51.943979 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:52.015280 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:52.126631 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:52.127616 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:52.128989 | orchestrator | 2025-06-22 19:36:52.129975 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-22 19:36:52.131606 | orchestrator | Sunday 22 June 2025 19:36:52 +0000 (0:00:00.684) 0:05:46.377 *********** 2025-06-22 19:36:53.698375 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:53.698471 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:53.698484 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:53.698495 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:53.698641 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:53.700013 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:53.700522 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:53.700934 | orchestrator | 2025-06-22 19:36:53.701282 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-22 19:36:53.701967 | orchestrator | Sunday 22 June 2025 19:36:53 +0000 (0:00:01.568) 0:05:47.945 *********** 2025-06-22 19:36:54.605078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:36:54.605587 | orchestrator | 2025-06-22 19:36:54.606429 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-22 19:36:54.607071 | orchestrator | Sunday 22 June 2025 19:36:54 +0000 (0:00:00.909) 0:05:48.855 *********** 2025-06-22 19:36:55.006705 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:55.392614 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:55.393200 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:55.394451 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:55.395229 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:55.396492 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:55.397978 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:55.399092 | orchestrator | 2025-06-22 19:36:55.399540 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-22 19:36:55.400314 | orchestrator | Sunday 22 June 2025 19:36:55 +0000 (0:00:00.787) 0:05:49.642 *********** 2025-06-22 19:36:55.878775 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:55.952024 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:56.415509 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:56.415734 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:56.417122 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:56.417887 | orchestrator | 
changed: [testbed-node-4] 2025-06-22 19:36:56.420521 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:56.421154 | orchestrator | 2025-06-22 19:36:56.421908 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-22 19:36:56.422616 | orchestrator | Sunday 22 June 2025 19:36:56 +0000 (0:00:01.024) 0:05:50.666 *********** 2025-06-22 19:36:57.673750 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:57.675088 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:57.676507 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:57.677721 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:57.678733 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:57.679446 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:57.680219 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:57.681224 | orchestrator | 2025-06-22 19:36:57.681844 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-22 19:36:57.682354 | orchestrator | Sunday 22 June 2025 19:36:57 +0000 (0:00:01.256) 0:05:51.923 *********** 2025-06-22 19:36:57.801728 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:58.969986 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:58.970706 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:58.973852 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:58.974837 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:58.975694 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:58.976708 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:58.977042 | orchestrator | 2025-06-22 19:36:58.977530 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-22 19:36:58.978264 | orchestrator | Sunday 22 June 2025 19:36:58 +0000 (0:00:01.296) 0:05:53.219 *********** 2025-06-22 19:37:00.306990 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:00.307087 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:00.307670 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:00.307946 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:00.308425 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:00.309197 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:00.309382 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:00.310115 | orchestrator | 2025-06-22 19:37:00.310459 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-22 19:37:00.310532 | orchestrator | Sunday 22 June 2025 19:37:00 +0000 (0:00:01.337) 0:05:54.556 *********** 2025-06-22 19:37:01.609786 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:01.610739 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:01.612264 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:01.612581 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:01.613486 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:01.614945 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:01.615773 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:01.617031 | orchestrator | 2025-06-22 19:37:01.617993 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-22 19:37:01.618715 | orchestrator | Sunday 22 June 2025 19:37:01 +0000 (0:00:01.302) 0:05:55.859 *********** 2025-06-22 19:37:02.628382 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:02.629790 | orchestrator | 2025-06-22 19:37:02.630182 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-22 19:37:02.631044 | orchestrator | Sunday 22 June 2025 19:37:02 +0000 (0:00:01.019) 0:05:56.878 *********** 2025-06-22 19:37:03.903497 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:03.906298 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:03.906349 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:03.906361 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:03.906603 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:03.907136 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:03.908281 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:03.908505 | orchestrator | 2025-06-22 19:37:03.909216 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-22 19:37:03.909708 | orchestrator | Sunday 22 June 2025 19:37:03 +0000 (0:00:01.275) 0:05:58.153 *********** 2025-06-22 19:37:04.972777 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:04.972986 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:04.974217 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:04.975275 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:04.976353 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:04.977129 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:04.977884 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:04.978397 | orchestrator | 2025-06-22 19:37:04.979495 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-22 19:37:04.979703 | orchestrator | Sunday 22 June 2025 19:37:04 +0000 (0:00:01.066) 0:05:59.220 *********** 2025-06-22 19:37:06.369194 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:06.369579 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:06.371252 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:06.371684 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:06.372435 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:06.373138 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:06.373607 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:06.374435 | orchestrator | 2025-06-22 19:37:06.375144 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-22 19:37:06.375531 | orchestrator | Sunday 22 June 2025 19:37:06 +0000 (0:00:01.398) 0:06:00.618 *********** 2025-06-22 19:37:07.435678 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:07.435847 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:07.438698 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:07.438747 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:07.438760 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:07.439610 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:07.440485 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:07.441229 | orchestrator | 2025-06-22 19:37:07.442159 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-22 19:37:07.443155 | orchestrator | Sunday 22 June 2025 19:37:07 +0000 (0:00:01.065) 0:06:01.684 *********** 2025-06-22 19:37:08.695138 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:08.695408 | orchestrator | 2025-06-22 19:37:08.696208 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.697061 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.969) 0:06:02.653 *********** 2025-06-22 19:37:08.698285 | orchestrator | 2025-06-22 19:37:08.698446 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.698796 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.039) 0:06:02.693 *********** 2025-06-22 19:37:08.700185 | orchestrator | 2025-06-22 19:37:08.700655 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.701026 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.045) 0:06:02.738 *********** 2025-06-22 19:37:08.701800 | orchestrator | 2025-06-22 19:37:08.702079 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.702476 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.044) 0:06:02.782 *********** 2025-06-22 19:37:08.703301 | orchestrator | 2025-06-22 19:37:08.703498 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.703873 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.041) 0:06:02.823 *********** 2025-06-22 19:37:08.704421 | orchestrator | 2025-06-22 19:37:08.705348 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.705972 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.044) 0:06:02.868 *********** 2025-06-22 19:37:08.706341 | orchestrator | 2025-06-22 19:37:08.706967 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:37:08.707228 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.038) 0:06:02.906 *********** 2025-06-22 19:37:08.707701 | orchestrator | 2025-06-22 19:37:08.708116 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:37:08.708425 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:00.037) 0:06:02.944 *********** 2025-06-22 19:37:09.905388 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:09.905585 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:09.906306 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:09.906644 | orchestrator | 2025-06-22 19:37:09.907463 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-22 19:37:09.907770 | orchestrator | Sunday 22 June 2025 19:37:09 +0000 (0:00:01.210) 0:06:04.155 *********** 2025-06-22 19:37:11.176353 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:11.176459 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:11.176475 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:11.176486 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:11.176948 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:11.179452 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:11.180205 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:11.180457 | orchestrator | 2025-06-22 19:37:11.181138 | 
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-22 19:37:11.182374 | orchestrator | Sunday 22 June 2025 19:37:11 +0000 (0:00:01.263) 0:06:05.418 *********** 2025-06-22 19:37:13.222413 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:13.222728 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:13.225304 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:13.227944 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:13.228674 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:13.229779 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:13.231311 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:13.231994 | orchestrator | 2025-06-22 19:37:13.232837 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-22 19:37:13.233280 | orchestrator | Sunday 22 June 2025 19:37:13 +0000 (0:00:02.053) 0:06:07.472 *********** 2025-06-22 19:37:13.354157 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:15.254416 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:15.254519 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:15.255283 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:15.257904 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:15.258672 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:15.259419 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:15.260690 | orchestrator | 2025-06-22 19:37:15.262798 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-22 19:37:15.263475 | orchestrator | Sunday 22 June 2025 19:37:15 +0000 (0:00:02.030) 0:06:09.502 *********** 2025-06-22 19:37:15.351971 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:15.352188 | orchestrator | 2025-06-22 19:37:15.353143 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-22 19:37:15.354080 | orchestrator | Sunday 22 June 2025 19:37:15 +0000 (0:00:00.101) 0:06:09.603 *********** 2025-06-22 19:37:16.270722 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:16.271402 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:16.272949 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:16.273872 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:16.276056 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:16.276845 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:16.277176 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:16.278174 | orchestrator | 2025-06-22 19:37:16.278532 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-22 19:37:16.279533 | orchestrator | Sunday 22 June 2025 19:37:16 +0000 (0:00:00.917) 0:06:10.521 *********** 2025-06-22 19:37:16.569040 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:16.633166 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:16.697790 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:16.765928 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:16.828639 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:16.937093 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:16.937460 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:16.938768 | orchestrator | 2025-06-22 19:37:16.940061 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-06-22 19:37:16.940740 | orchestrator | Sunday 22 June 2025 19:37:16 +0000 (0:00:00.666) 0:06:11.187 *********** 2025-06-22 19:37:17.776495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:17.777720 | orchestrator | 2025-06-22 19:37:17.778489 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-22 19:37:17.782146 | orchestrator | Sunday 22 June 2025 19:37:17 +0000 (0:00:00.841) 0:06:12.029 *********** 2025-06-22 19:37:18.182742 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:18.558705 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:18.559858 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:18.561414 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:18.562351 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:18.563302 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:18.564284 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:18.565129 | orchestrator | 2025-06-22 19:37:18.566229 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-22 19:37:18.566926 | orchestrator | Sunday 22 June 2025 19:37:18 +0000 (0:00:00.780) 0:06:12.809 *********** 2025-06-22 19:37:20.982067 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-22 19:37:20.982744 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-22 19:37:20.984115 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-22 19:37:20.986904 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-22 19:37:20.986995 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-22 19:37:20.987923 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-22 19:37:20.989108 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-22 19:37:20.989750 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-22 19:37:20.990267 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-22 19:37:20.990736 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-22 19:37:20.991712 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-22 19:37:20.994470 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-22 19:37:20.994528 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-22 19:37:20.994610 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-22 19:37:20.994627 | orchestrator | 2025-06-22 19:37:20.994639 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-22 19:37:20.994679 | orchestrator | Sunday 22 June 2025 19:37:20 +0000 (0:00:02.422) 0:06:15.231 *********** 2025-06-22 19:37:21.117033 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:21.178473 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:21.248332 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:21.310253 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:21.373702 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:21.481762 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:21.482433 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 19:37:21.483352 | orchestrator | 2025-06-22 19:37:21.484086 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-22 19:37:21.484991 | orchestrator | Sunday 22 June 2025 19:37:21 +0000 (0:00:00.502) 0:06:15.733 *********** 2025-06-22 19:37:22.291262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:22.294469 | orchestrator | 2025-06-22 19:37:22.295429 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-22 19:37:22.298205 | orchestrator | Sunday 22 June 2025 19:37:22 +0000 (0:00:00.806) 0:06:16.540 *********** 2025-06-22 19:37:22.831884 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:22.899705 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:23.314283 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:23.314975 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:23.316032 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:23.318546 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:23.319102 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:23.320232 | orchestrator | 2025-06-22 19:37:23.320967 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-22 19:37:23.321636 | orchestrator | Sunday 22 June 2025 19:37:23 +0000 (0:00:01.022) 0:06:17.563 *********** 2025-06-22 19:37:23.760344 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:24.145040 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:24.145143 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:24.145638 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:24.146406 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:24.147306 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:24.148948 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:24.149924 | orchestrator | 2025-06-22 19:37:24.150468 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-22 19:37:24.151279 | orchestrator | Sunday 22 June 2025 19:37:24 +0000 (0:00:00.830) 0:06:18.394 *********** 2025-06-22 19:37:24.278762 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:24.343643 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:24.410530 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:24.481541 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:24.546214 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:24.636466 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:24.637577 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:24.638674 | orchestrator | 2025-06-22 19:37:24.644198 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-22 19:37:24.644245 | orchestrator | Sunday 22 June 2025 19:37:24 +0000 (0:00:00.493) 0:06:18.887 *********** 2025-06-22 19:37:25.838174 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:25.839302 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:25.840219 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:25.840905 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:25.841952 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:25.842463 | orchestrator | ok: [testbed-node-4] 
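Note: the osism.services.docker steps above ("Copy daemon.json configuration file", "Copy systemd overlay file") together with the later "Restart docker service" handler follow the usual Ansible template-plus-notify pattern. A minimal standalone sketch of that pattern is shown below; the group name, template name and file mode are assumptions for illustration and are not taken from the actual osism.services.docker role.

    # Hypothetical playbook snippet illustrating the template + handler pattern
    - hosts: docker_hosts              # placeholder group name
      tasks:
        - name: Copy daemon.json configuration file
          ansible.builtin.template:
            src: daemon.json.j2        # assumed template name
            dest: /etc/docker/daemon.json
            mode: "0644"
          notify: Restart docker service
      handlers:
        - name: Restart docker service
          ansible.builtin.service:
            name: docker
            state: restarted

A changed result on the template task queues the handler, which then runs when handlers are flushed (compare the "Flush handlers" entries earlier in this log).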
2025-06-22 19:37:25.843186 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:25.843648 | orchestrator | 2025-06-22 19:37:25.844334 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-22 19:37:25.844740 | orchestrator | Sunday 22 June 2025 19:37:25 +0000 (0:00:01.201) 0:06:20.088 *********** 2025-06-22 19:37:25.964797 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:26.032826 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:26.103539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:26.164782 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:26.230949 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:26.311224 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:26.312113 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:26.312151 | orchestrator | 2025-06-22 19:37:26.312816 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-22 19:37:26.316695 | orchestrator | Sunday 22 June 2025 19:37:26 +0000 (0:00:00.472) 0:06:20.561 *********** 2025-06-22 19:37:32.652180 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:32.652311 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:32.652622 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:32.652804 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:32.653490 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:32.653719 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:32.654318 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:32.659479 | orchestrator | 2025-06-22 19:37:32.659506 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-22 19:37:32.659519 | orchestrator | Sunday 22 June 2025 19:37:32 +0000 (0:00:06.339) 0:06:26.901 *********** 2025-06-22 19:37:33.884173 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:33.885235 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:33.886302 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:33.887243 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:33.887936 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:33.888495 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:33.889123 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:33.889763 | orchestrator | 2025-06-22 19:37:33.890593 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-22 19:37:33.891108 | orchestrator | Sunday 22 June 2025 19:37:33 +0000 (0:00:01.234) 0:06:28.135 *********** 2025-06-22 19:37:35.540452 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:35.542170 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:35.542202 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:35.543932 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:35.544767 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:35.545929 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:35.546524 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:35.547800 | orchestrator | 2025-06-22 19:37:35.548251 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-22 19:37:35.549153 | orchestrator | Sunday 22 June 2025 19:37:35 +0000 (0:00:01.654) 0:06:29.789 *********** 2025-06-22 19:37:37.058138 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:37.059627 | 
orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:37.060903 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:37.062466 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:37.063434 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:37.064685 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:37.065929 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:37.066910 | orchestrator | 2025-06-22 19:37:37.067808 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:37:37.069188 | orchestrator | Sunday 22 June 2025 19:37:37 +0000 (0:00:01.518) 0:06:31.307 *********** 2025-06-22 19:37:37.459686 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:38.052595 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:38.054482 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:38.055283 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:38.056462 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:38.057053 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:38.057901 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:38.060027 | orchestrator | 2025-06-22 19:37:38.060780 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:37:38.061634 | orchestrator | Sunday 22 June 2025 19:37:38 +0000 (0:00:00.996) 0:06:32.304 *********** 2025-06-22 19:37:38.192294 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:38.256508 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:38.322382 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:38.383770 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:38.452176 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:38.841821 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:38.841993 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:38.842686 | orchestrator | 2025-06-22 19:37:38.843789 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-22 19:37:38.845196 | orchestrator | Sunday 22 June 2025 19:37:38 +0000 (0:00:00.786) 0:06:33.091 *********** 2025-06-22 19:37:38.974897 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:39.036829 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:39.109186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:39.172520 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:39.235155 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:39.359096 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:39.359255 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:39.360666 | orchestrator | 2025-06-22 19:37:39.361328 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-22 19:37:39.364817 | orchestrator | Sunday 22 June 2025 19:37:39 +0000 (0:00:00.519) 0:06:33.611 *********** 2025-06-22 19:37:39.497129 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:39.566106 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:39.629440 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:39.694106 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:39.933287 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:40.038002 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:40.039089 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:40.039126 | orchestrator | 2025-06-22 19:37:40.039432 | orchestrator | TASK 
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-22 19:37:40.040678 | orchestrator | Sunday 22 June 2025 19:37:40 +0000 (0:00:00.677) 0:06:34.288 *********** 2025-06-22 19:37:40.178991 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:40.241125 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:40.305601 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:40.374808 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:40.439679 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:40.544202 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:40.544958 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:40.546132 | orchestrator | 2025-06-22 19:37:40.547744 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-22 19:37:40.547774 | orchestrator | Sunday 22 June 2025 19:37:40 +0000 (0:00:00.505) 0:06:34.794 *********** 2025-06-22 19:37:40.686303 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:40.753867 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:40.845895 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:40.912850 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:40.977487 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:41.106076 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:41.108226 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:41.108267 | orchestrator | 2025-06-22 19:37:41.108286 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-22 19:37:41.113300 | orchestrator | Sunday 22 June 2025 19:37:41 +0000 (0:00:00.558) 0:06:35.353 *********** 2025-06-22 19:37:46.466525 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:46.466819 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:46.466850 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:46.468184 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:46.468263 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:46.469273 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:46.472254 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:46.472714 | orchestrator | 2025-06-22 19:37:46.473181 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-22 19:37:46.473830 | orchestrator | Sunday 22 June 2025 19:37:46 +0000 (0:00:05.364) 0:06:40.717 *********** 2025-06-22 19:37:46.599384 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:46.661256 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:46.724180 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:46.792885 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:46.865441 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:46.980873 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:46.981213 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:46.982417 | orchestrator | 2025-06-22 19:37:46.986090 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-22 19:37:46.986162 | orchestrator | Sunday 22 June 2025 19:37:46 +0000 (0:00:00.513) 0:06:41.231 *********** 2025-06-22 19:37:47.937838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:47.938618 | orchestrator | 2025-06-22 
19:37:47.941374 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-22 19:37:47.942009 | orchestrator | Sunday 22 June 2025 19:37:47 +0000 (0:00:00.957) 0:06:42.188 *********** 2025-06-22 19:37:49.468605 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:49.469389 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:49.469546 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:49.469888 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:49.470296 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:49.470709 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:49.471454 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:49.471648 | orchestrator | 2025-06-22 19:37:49.474634 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-22 19:37:49.475226 | orchestrator | Sunday 22 June 2025 19:37:49 +0000 (0:00:01.529) 0:06:43.717 *********** 2025-06-22 19:37:50.525442 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:50.525874 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:50.526379 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:50.527244 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:50.530672 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:50.530693 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:50.530705 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:50.530755 | orchestrator | 2025-06-22 19:37:50.531694 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-22 19:37:50.532118 | orchestrator | Sunday 22 June 2025 19:37:50 +0000 (0:00:01.059) 0:06:44.776 *********** 2025-06-22 19:37:51.522844 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:51.522992 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:51.524070 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:51.524410 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:51.525488 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:51.526132 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:51.526674 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:51.527462 | orchestrator | 2025-06-22 19:37:51.528773 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-22 19:37:51.528821 | orchestrator | Sunday 22 June 2025 19:37:51 +0000 (0:00:00.993) 0:06:45.769 *********** 2025-06-22 19:37:53.138479 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.139008 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.139050 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.143402 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.143512 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.144628 | orchestrator | changed: [testbed-node-4] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.145465 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:37:53.145924 | orchestrator | 2025-06-22 19:37:53.146925 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-22 19:37:53.147285 | orchestrator | Sunday 22 June 2025 19:37:53 +0000 (0:00:01.616) 0:06:47.386 *********** 2025-06-22 19:37:53.918988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:37:53.919790 | orchestrator | 2025-06-22 19:37:53.921026 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-22 19:37:53.924600 | orchestrator | Sunday 22 June 2025 19:37:53 +0000 (0:00:00.782) 0:06:48.169 *********** 2025-06-22 19:38:01.910957 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:01.911060 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:01.911846 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:01.912930 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:01.913539 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:01.914465 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:01.915462 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:01.916207 | orchestrator | 2025-06-22 19:38:01.916844 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-22 19:38:01.917584 | orchestrator | Sunday 22 June 2025 19:38:01 +0000 (0:00:07.990) 0:06:56.160 *********** 2025-06-22 19:38:03.419960 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:03.420091 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:03.420410 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:03.421361 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:03.422206 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:03.422976 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:03.423430 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:03.424536 | orchestrator | 2025-06-22 19:38:03.425010 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-22 19:38:03.425506 | orchestrator | Sunday 22 June 2025 19:38:03 +0000 (0:00:01.509) 0:06:57.669 *********** 2025-06-22 19:38:04.596915 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:04.598628 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:04.598663 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:04.599354 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:04.599792 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:04.600881 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:04.601680 | orchestrator | 2025-06-22 19:38:04.602542 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-22 19:38:04.603027 | orchestrator | Sunday 22 June 2025 19:38:04 +0000 (0:00:01.176) 0:06:58.846 *********** 2025-06-22 19:38:05.965142 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:05.966416 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:05.966682 | orchestrator | changed: [testbed-node-1] 
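Note: several roles above (osism.commons.docker_compose, osism.services.chrony, osism.services.lldpd, and later osism.commons.network) log an "Include distribution specific install tasks" step that pulls in an install-Debian-family.yml file. That is consistent with a dispatch on the OS family fact; a minimal sketch of such a dispatch follows. Only the resulting file name is visible in the log, so the task layout and the example package task are assumptions.

    # Hypothetical dispatch on the distribution family
    - name: Include distribution specific install tasks
      ansible.builtin.include_tasks: "install-{{ ansible_os_family }}-family.yml"

    # What install-Debian-family.yml could then contain, e.g. for lldpd
    - name: Install lldpd package
      ansible.builtin.apt:
        name: lldpd
        state: present

On the Ubuntu 24.04 testbed nodes ansible_os_family resolves to "Debian", which matches the install-Debian-family.yml paths shown in the log.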
2025-06-22 19:38:05.967935 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:05.968938 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:05.969630 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:05.970588 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:05.971306 | orchestrator | 2025-06-22 19:38:05.971982 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-22 19:38:05.972825 | orchestrator | 2025-06-22 19:38:05.973725 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-22 19:38:05.974181 | orchestrator | Sunday 22 June 2025 19:38:05 +0000 (0:00:01.370) 0:07:00.217 *********** 2025-06-22 19:38:06.075623 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:06.125897 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:06.179214 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:06.229625 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:06.279838 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:06.370228 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:06.370945 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:06.372364 | orchestrator | 2025-06-22 19:38:06.373253 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-22 19:38:06.374240 | orchestrator | 2025-06-22 19:38:06.374920 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-22 19:38:06.376039 | orchestrator | Sunday 22 June 2025 19:38:06 +0000 (0:00:00.404) 0:07:00.621 *********** 2025-06-22 19:38:07.524535 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:07.525654 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:07.525755 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:07.526770 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:07.527373 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:07.528077 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:07.528714 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:07.529257 | orchestrator | 2025-06-22 19:38:07.529845 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-22 19:38:07.530622 | orchestrator | Sunday 22 June 2025 19:38:07 +0000 (0:00:01.154) 0:07:01.776 *********** 2025-06-22 19:38:08.772902 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:08.773087 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:08.773822 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:08.777716 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:08.778451 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:08.779236 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:08.779705 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:08.780499 | orchestrator | 2025-06-22 19:38:08.780871 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-22 19:38:08.781465 | orchestrator | Sunday 22 June 2025 19:38:08 +0000 (0:00:01.247) 0:07:03.023 *********** 2025-06-22 19:38:09.008489 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:09.062966 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:09.120114 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:09.174290 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:09.228436 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 19:38:09.572993 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:09.573612 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:09.579423 | orchestrator | 2025-06-22 19:38:09.579463 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-22 19:38:09.579476 | orchestrator | Sunday 22 June 2025 19:38:09 +0000 (0:00:00.802) 0:07:03.826 *********** 2025-06-22 19:38:10.681104 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:10.682828 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:10.686903 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:10.689620 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:10.689733 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:10.691132 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:10.691763 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:10.692469 | orchestrator | 2025-06-22 19:38:10.693381 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-22 19:38:10.694626 | orchestrator | 2025-06-22 19:38:10.694711 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-22 19:38:10.695395 | orchestrator | Sunday 22 June 2025 19:38:10 +0000 (0:00:01.103) 0:07:04.929 *********** 2025-06-22 19:38:11.690701 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:38:11.691333 | orchestrator | 2025-06-22 19:38:11.691370 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:38:11.691385 | orchestrator | Sunday 22 June 2025 19:38:11 +0000 (0:00:01.010) 0:07:05.939 *********** 2025-06-22 19:38:12.101309 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:12.502988 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:12.503088 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:12.504466 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:12.505527 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:12.506506 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:12.507299 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:12.507993 | orchestrator | 2025-06-22 19:38:12.508874 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:38:12.508895 | orchestrator | Sunday 22 June 2025 19:38:12 +0000 (0:00:00.810) 0:07:06.750 *********** 2025-06-22 19:38:13.563325 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:13.564283 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:13.565572 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:13.567177 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:13.567771 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:13.568674 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:13.569711 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:13.570824 | orchestrator | 2025-06-22 19:38:13.572315 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-22 19:38:13.572692 | orchestrator | Sunday 22 June 2025 19:38:13 +0000 (0:00:01.063) 0:07:07.813 *********** 2025-06-22 19:38:14.541875 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2025-06-22 19:38:14.542275 | orchestrator | 2025-06-22 19:38:14.545990 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:38:14.546068 | orchestrator | Sunday 22 June 2025 19:38:14 +0000 (0:00:00.976) 0:07:08.790 *********** 2025-06-22 19:38:14.947595 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:15.329041 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:15.329866 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:15.331296 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:15.331999 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:15.332686 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:15.333583 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:15.334763 | orchestrator | 2025-06-22 19:38:15.335125 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:38:15.336038 | orchestrator | Sunday 22 June 2025 19:38:15 +0000 (0:00:00.785) 0:07:09.575 *********** 2025-06-22 19:38:15.738162 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:16.412369 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:16.412533 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:16.413305 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:16.414747 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:16.416723 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:16.417684 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:16.418981 | orchestrator | 2025-06-22 19:38:16.420148 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:38:16.421424 | orchestrator | 2025-06-22 19:38:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:38:16.421449 | orchestrator | 2025-06-22 19:38:16 | INFO  | Please wait and do not abort execution. 
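Note: the "Set state bootstrap" play above records osism.bootstrap.status and osism.bootstrap.timestamp through osism.commons.state ("Create custom facts directory", "Write state into file"). The log does not show the file contents; the sketch below is only one plausible way to persist such state as an Ansible local fact, with the path, file name and payload being assumptions. The recap that follows summarizes the whole bootstrap run.

    # Hypothetical sketch: persisting state as a local fact
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d      # assumed location for local facts
        state: directory
        mode: "0755"

    - name: Write state into file
      ansible.builtin.copy:
        content: '{"bootstrap": {"status": "True"}}'   # assumed payload
        dest: /etc/ansible/facts.d/osism.fact          # assumed file name
        mode: "0644"

Written this way, the value would surface as ansible_local.osism.bootstrap.status on the next fact gathering.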
2025-06-22 19:38:16.422694 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-22 19:38:16.424081 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-22 19:38:16.424779 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 19:38:16.426232 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 19:38:16.426633 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 19:38:16.427706 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 19:38:16.428256 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 19:38:16.429044 | orchestrator |
2025-06-22 19:38:16.429813 | orchestrator |
2025-06-22 19:38:16.430293 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:38:16.430920 | orchestrator | Sunday 22 June 2025 19:38:16 +0000 (0:00:01.086) 0:07:10.662 ***********
2025-06-22 19:38:16.431577 | orchestrator | ===============================================================================
2025-06-22 19:38:16.432061 | orchestrator | osism.commons.packages : Install required packages --------------------- 68.31s
2025-06-22 19:38:16.432845 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.56s
2025-06-22 19:38:16.433245 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 31.09s
2025-06-22 19:38:16.433788 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.18s
2025-06-22 19:38:16.434634 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.88s
2025-06-22 19:38:16.435020 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.69s
2025-06-22 19:38:16.435569 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.20s
2025-06-22 19:38:16.436015 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.04s
2025-06-22 19:38:16.436692 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.37s
2025-06-22 19:38:16.437104 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.13s
2025-06-22 19:38:16.437391 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 7.99s
2025-06-22 19:38:16.438120 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 6.95s
2025-06-22 19:38:16.438474 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.73s
2025-06-22 19:38:16.438981 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.72s
2025-06-22 19:38:16.439395 | orchestrator | osism.services.rng : Install rng package -------------------------------- 6.51s
2025-06-22 19:38:16.440074 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.34s
2025-06-22 19:38:16.440494 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.57s
2025-06-22 19:38:16.440844 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.54s
2025-06-22 19:38:16.441388 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.36s
2025-06-22 19:38:16.441917 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.29s
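Note: the "osism apply network" step below applies osism.commons.network, which renders a netplan file and prunes unmanaged ones (later tasks reference /etc/netplan/01-osism.yaml as the managed file and remove /etc/netplan/50-cloud-init.yaml). The generated contents are not shown in the log; the following is only a generic netplan sketch of the shape such a file takes, with interface names and addresses as placeholders.

    # Placeholder netplan example; not the testbed's actual addressing
    network:
      version: 2
      ethernets:
        eth0:                      # interface name is a placeholder
          dhcp4: false
          addresses:
            - 192.0.2.10/24        # documentation range
          routes:
            - to: default
              via: 192.0.2.1
          nameservers:
            addresses: [192.0.2.1]

The rendered configuration would then be applied by the role's subsequent tasks or handlers, which are not fully shown in this excerpt.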
2025-06-22 19:38:17.091850 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-22 19:38:17.091961 | orchestrator | + osism apply network
2025-06-22 19:38:19.256645 | orchestrator | Registering Redlock._acquired_script
2025-06-22 19:38:19.256971 | orchestrator | Registering Redlock._extend_script
2025-06-22 19:38:19.257001 | orchestrator | Registering Redlock._release_script
2025-06-22 19:38:19.322217 | orchestrator | 2025-06-22 19:38:19 | INFO  | Task face6c66-37ec-4f92-8e01-3e2a8932f734 (network) was prepared for execution.
2025-06-22 19:38:19.322357 | orchestrator | 2025-06-22 19:38:19 | INFO  | It takes a moment until task face6c66-37ec-4f92-8e01-3e2a8932f734 (network) has been started and output is visible here.
2025-06-22 19:38:23.469001 | orchestrator |
2025-06-22 19:38:23.472184 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-22 19:38:23.474072 | orchestrator |
2025-06-22 19:38:23.475064 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-22 19:38:23.475966 | orchestrator | Sunday 22 June 2025 19:38:23 +0000 (0:00:00.274) 0:00:00.274 ***********
2025-06-22 19:38:23.646494 | orchestrator | ok: [testbed-manager]
2025-06-22 19:38:23.724236 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:38:23.802885 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:38:23.880529 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:38:24.054824 | orchestrator | ok: [testbed-node-3]
2025-06-22 19:38:24.201500 | orchestrator | ok: [testbed-node-4]
2025-06-22 19:38:24.201667 | orchestrator | ok: [testbed-node-5]
2025-06-22 19:38:24.211468 | orchestrator |
2025-06-22 19:38:24.211537 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-22 19:38:24.211606 | orchestrator | Sunday 22 June 2025 19:38:24 +0000 (0:00:00.730) 0:00:01.005 ***********
2025-06-22 19:38:25.390309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 19:38:25.391358 | orchestrator |
2025-06-22 19:38:25.394281 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-22 19:38:25.394329 | orchestrator | Sunday 22 June 2025 19:38:25 +0000 (0:00:01.190) 0:00:02.195 ***********
2025-06-22 19:38:27.077733 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:38:27.078061 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:38:27.079499 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:38:27.083279 | orchestrator | ok: [testbed-manager]
2025-06-22 19:38:27.083316 | orchestrator | ok: [testbed-node-3]
2025-06-22 19:38:27.083323 | orchestrator | ok: [testbed-node-4]
2025-06-22 19:38:27.083331 | orchestrator | ok: [testbed-node-5]
2025-06-22 19:38:27.083598 | orchestrator |
2025-06-22 19:38:27.084687 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-22 19:38:27.084929 | orchestrator | Sunday 22 June 2025 19:38:27 +0000 (0:00:01.688)
0:00:03.884 *********** 2025-06-22 19:38:28.769105 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:28.770808 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:28.772172 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:28.773274 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:28.774211 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:28.775602 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:28.776708 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:28.780581 | orchestrator | 2025-06-22 19:38:28.781036 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-22 19:38:28.782044 | orchestrator | Sunday 22 June 2025 19:38:28 +0000 (0:00:01.689) 0:00:05.573 *********** 2025-06-22 19:38:29.275491 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-22 19:38:29.275628 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-22 19:38:29.704848 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-22 19:38:29.705406 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-22 19:38:29.706593 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-22 19:38:29.707718 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-22 19:38:29.708407 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-22 19:38:29.709307 | orchestrator | 2025-06-22 19:38:29.710209 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-22 19:38:29.710820 | orchestrator | Sunday 22 June 2025 19:38:29 +0000 (0:00:00.940) 0:00:06.513 *********** 2025-06-22 19:38:33.007882 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:38:33.008062 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:38:33.008150 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:38:33.009379 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:38:33.009607 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:38:33.009937 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:38:33.014298 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:38:33.014854 | orchestrator | 2025-06-22 19:38:33.015819 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-22 19:38:33.016656 | orchestrator | Sunday 22 June 2025 19:38:32 +0000 (0:00:03.300) 0:00:09.813 *********** 2025-06-22 19:38:34.360906 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:34.360998 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:34.361579 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:34.362629 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:34.363918 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:34.364172 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:34.364885 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:34.365650 | orchestrator | 2025-06-22 19:38:34.366295 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-22 19:38:34.366603 | orchestrator | Sunday 22 June 2025 19:38:34 +0000 (0:00:01.353) 0:00:11.167 *********** 2025-06-22 19:38:36.017075 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:38:36.017545 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:38:36.019036 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:38:36.019803 | 
orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:38:36.020621 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:38:36.021176 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:38:36.022210 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:38:36.023096 | orchestrator | 2025-06-22 19:38:36.023740 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-22 19:38:36.024592 | orchestrator | Sunday 22 June 2025 19:38:36 +0000 (0:00:01.658) 0:00:12.825 *********** 2025-06-22 19:38:36.397052 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:36.589224 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:36.975976 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:36.976971 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:36.978171 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:36.979843 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:36.980221 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:36.981219 | orchestrator | 2025-06-22 19:38:36.981869 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-22 19:38:36.982804 | orchestrator | Sunday 22 June 2025 19:38:36 +0000 (0:00:00.957) 0:00:13.783 *********** 2025-06-22 19:38:37.121670 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:37.194424 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:37.266860 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:37.336988 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:37.409044 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:37.538754 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:37.541898 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:37.541939 | orchestrator | 2025-06-22 19:38:37.541953 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-22 19:38:37.542831 | orchestrator | Sunday 22 June 2025 19:38:37 +0000 (0:00:00.565) 0:00:14.348 *********** 2025-06-22 19:38:39.328035 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:39.328457 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:39.329607 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:39.332345 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:39.333114 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:39.334902 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:39.335610 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:39.336952 | orchestrator | 2025-06-22 19:38:39.337501 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-22 19:38:39.338129 | orchestrator | Sunday 22 June 2025 19:38:39 +0000 (0:00:01.783) 0:00:16.131 *********** 2025-06-22 19:38:39.589190 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:39.670267 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:39.752495 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:39.833662 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:40.252487 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:40.252689 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:40.254423 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-22 19:38:40.254739 | orchestrator | 2025-06-22 19:38:40.256895 | orchestrator | TASK [osism.commons.network : Manage 
service networkd-dispatcher] ************** 2025-06-22 19:38:40.256916 | orchestrator | Sunday 22 June 2025 19:38:40 +0000 (0:00:00.924) 0:00:17.056 *********** 2025-06-22 19:38:42.234108 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:42.234293 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:42.235103 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:42.235716 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:42.236485 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:42.242207 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:42.242238 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:42.242250 | orchestrator | 2025-06-22 19:38:42.242262 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-22 19:38:42.242275 | orchestrator | Sunday 22 June 2025 19:38:42 +0000 (0:00:01.976) 0:00:19.032 *********** 2025-06-22 19:38:43.537372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:38:43.537545 | orchestrator | 2025-06-22 19:38:43.538838 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:38:43.539802 | orchestrator | Sunday 22 June 2025 19:38:43 +0000 (0:00:01.307) 0:00:20.339 *********** 2025-06-22 19:38:44.647527 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:44.648395 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:44.649980 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:44.651120 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:44.651422 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:44.652741 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:44.654957 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:44.656148 | orchestrator | 2025-06-22 19:38:44.656765 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-22 19:38:44.657848 | orchestrator | Sunday 22 June 2025 19:38:44 +0000 (0:00:01.111) 0:00:21.451 *********** 2025-06-22 19:38:44.814891 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:44.898593 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:44.981537 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:45.066472 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:45.144426 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:45.285811 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:45.287237 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:45.289245 | orchestrator | 2025-06-22 19:38:45.289270 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:38:45.289872 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.638) 0:00:22.089 *********** 2025-06-22 19:38:45.698350 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:45.698816 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.014517 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:46.016101 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.022802 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 
19:38:46.022881 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.022944 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:46.023632 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.024502 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:46.024896 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.502176 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:46.503283 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.504093 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:38:46.506105 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:38:46.506164 | orchestrator | 2025-06-22 19:38:46.507136 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-22 19:38:46.508334 | orchestrator | Sunday 22 June 2025 19:38:46 +0000 (0:00:01.217) 0:00:23.306 *********** 2025-06-22 19:38:46.658686 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:46.740372 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:46.821053 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:46.900154 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:46.975776 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:47.097350 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:47.097478 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:47.098832 | orchestrator | 2025-06-22 19:38:47.102428 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-22 19:38:47.103230 | orchestrator | Sunday 22 June 2025 19:38:47 +0000 (0:00:00.598) 0:00:23.905 *********** 2025-06-22 19:38:51.630157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-1, testbed-node-4, testbed-node-5 2025-06-22 19:38:51.630345 | orchestrator | 2025-06-22 19:38:51.634270 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-22 19:38:51.634301 | orchestrator | Sunday 22 June 2025 19:38:51 +0000 (0:00:04.528) 0:00:28.434 *********** 2025-06-22 19:38:56.744676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.744832 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.746884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.748112 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.750228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.750846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.752076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.753192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.754052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.754449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:38:56.755481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.756313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.757044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.757679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:38:56.758349 | orchestrator | 2025-06-22 19:38:56.759023 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-22 19:38:56.759766 | orchestrator | Sunday 22 June 2025 19:38:56 +0000 (0:00:05.114) 
0:00:33.548 *********** 2025-06-22 19:39:01.536213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.539316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.539674 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.540137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.540774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.541495 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.542321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.543006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.543137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.545329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:39:01.545737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.546144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.546613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.546929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:39:01.547462 | orchestrator | 2025-06-22 19:39:01.547845 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-22 19:39:01.548274 | orchestrator | Sunday 22 June 2025 19:39:01 +0000 (0:00:04.796) 0:00:38.345 *********** 2025-06-22 19:39:02.584629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:39:02.584809 | orchestrator | 2025-06-22 19:39:02.587361 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:39:02.590185 | orchestrator | Sunday 22 June 2025 19:39:02 +0000 (0:00:01.046) 0:00:39.391 *********** 2025-06-22 19:39:02.949978 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:03.184938 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:03.554096 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:03.558908 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:03.560157 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:03.561031 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:03.563208 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:03.564948 | orchestrator | 2025-06-22 19:39:03.565395 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:39:03.566162 | orchestrator | Sunday 22 June 2025 19:39:03 +0000 (0:00:00.972) 0:00:40.363 *********** 2025-06-22 19:39:03.636929 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:03.637381 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:03.638376 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:03.639401 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:03.717994 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:03.718487 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:03.719890 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:03.722242 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:03.722320 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:03.793951 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:03.794146 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 
19:39:03.794787 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:03.798102 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:03.798126 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:03.870408 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:03.870765 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:03.871196 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:03.871748 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:03.951970 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:03.952528 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:03.952842 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:03.953548 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:03.954221 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:04.147205 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:04.150237 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:04.150270 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:04.150727 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:04.151781 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:05.354599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:05.356481 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:05.358248 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:39:05.359119 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:39:05.360697 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:39:05.361760 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:39:05.362992 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:05.363799 | orchestrator | 2025-06-22 19:39:05.364643 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-22 19:39:05.365412 | orchestrator | Sunday 22 June 2025 19:39:05 +0000 (0:00:01.795) 0:00:42.159 *********** 2025-06-22 19:39:05.518886 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:05.597303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:05.680698 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:05.761551 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:05.845472 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:05.955159 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:05.955522 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:05.957359 | orchestrator | 2025-06-22 19:39:05.960230 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan 
configuration changed] ******** 2025-06-22 19:39:05.960262 | orchestrator | Sunday 22 June 2025 19:39:05 +0000 (0:00:00.605) 0:00:42.764 *********** 2025-06-22 19:39:06.126238 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:06.206672 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:06.454740 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:06.540604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:06.624806 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:06.673317 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:06.673822 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:06.675295 | orchestrator | 2025-06-22 19:39:06.676204 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:39:06.676535 | orchestrator | 2025-06-22 19:39:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:39:06.676776 | orchestrator | 2025-06-22 19:39:06 | INFO  | Please wait and do not abort execution. 2025-06-22 19:39:06.677881 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:39:06.678694 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.680988 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.681360 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.682141 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.682759 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.683172 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:39:06.683823 | orchestrator | 2025-06-22 19:39:06.684603 | orchestrator | 2025-06-22 19:39:06.684758 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:39:06.685538 | orchestrator | Sunday 22 June 2025 19:39:06 +0000 (0:00:00.716) 0:00:43.481 *********** 2025-06-22 19:39:06.686129 | orchestrator | =============================================================================== 2025-06-22 19:39:06.686649 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.11s 2025-06-22 19:39:06.686910 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.80s 2025-06-22 19:39:06.687376 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.53s 2025-06-22 19:39:06.687925 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.30s 2025-06-22 19:39:06.688749 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.98s 2025-06-22 19:39:06.689425 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.80s 2025-06-22 19:39:06.690222 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.78s 2025-06-22 19:39:06.691909 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.69s 2025-06-22 19:39:06.692229 | orchestrator | osism.commons.network : Install required 
packages ----------------------- 1.69s 2025-06-22 19:39:06.692770 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.66s 2025-06-22 19:39:06.693627 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.35s 2025-06-22 19:39:06.693976 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-06-22 19:39:06.694659 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s 2025-06-22 19:39:06.695322 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2025-06-22 19:39:06.695775 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2025-06-22 19:39:06.696330 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.05s 2025-06-22 19:39:06.696721 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-06-22 19:39:06.697423 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.96s 2025-06-22 19:39:06.697634 | orchestrator | osism.commons.network : Create required directories --------------------- 0.94s 2025-06-22 19:39:06.698124 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s 2025-06-22 19:39:07.328083 | orchestrator | + osism apply wireguard 2025-06-22 19:39:09.043826 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:39:09.043932 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:39:09.043947 | orchestrator | Registering Redlock._release_script 2025-06-22 19:39:09.108506 | orchestrator | 2025-06-22 19:39:09 | INFO  | Task 0af79e59-5e40-42a3-8a16-4c7ec952d08a (wireguard) was prepared for execution. 2025-06-22 19:39:09.108627 | orchestrator | 2025-06-22 19:39:09 | INFO  | It takes a moment until task 0af79e59-5e40-42a3-8a16-4c7ec952d08a (wireguard) has been started and output is visible here. 
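At this point the network play has rendered the netplan configuration (the managed /etc/netplan/01-osism.yaml, with the cloud-init default 50-cloud-init.yaml removed) and, for the two overlay networks, a .netdev/.network pair per host (the 30-vxlan0/30-vxlan1 files listed in the networkd cleanup task). The role's actual templates are not part of this log; the lines below are only a hand-written sketch of how the logged vxlan0 parameters for testbed-manager (vni 42, local_ip 192.168.16.5, mtu 1350, address 192.168.112.5/20, dests 192.168.16.10-15) could translate into systemd-networkd units. File names, option choices and the peer/FDB handling are assumptions made for illustration, not the osism.commons.network output.

    # Sketch only - not the osism.commons.network template output.
    # netdev unit: defines the VXLAN device itself (vni 42, local endpoint, MTU 1350).
    cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
    [NetDev]
    Name=vxlan0
    Kind=vxlan
    MTUBytes=1350

    [VXLAN]
    VNI=42
    Local=192.168.16.5
    MacLearning=true
    EOF

    # network unit: assigns the overlay address listed under 'addresses' in the log.
    cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
    [Match]
    Name=vxlan0

    [Network]
    Address=192.168.112.5/20
    EOF

    # The per-peer 'dests' (192.168.16.10-15) would typically end up as static
    # flooding entries (e.g. [BridgeFDB] Destination=... with an all-zero MAC) plus a
    # VXLAN=vxlan0 reference in the underlay interface's .network file, followed by a
    # networkd reload - hypothetical here; the role may solve this differently.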
2025-06-22 19:39:13.111906 | orchestrator | 2025-06-22 19:39:13.112712 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-22 19:39:13.114260 | orchestrator | 2025-06-22 19:39:13.115447 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-22 19:39:13.116884 | orchestrator | Sunday 22 June 2025 19:39:13 +0000 (0:00:00.215) 0:00:00.215 *********** 2025-06-22 19:39:14.555090 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:14.555253 | orchestrator | 2025-06-22 19:39:14.556065 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-22 19:39:14.556475 | orchestrator | Sunday 22 June 2025 19:39:14 +0000 (0:00:01.443) 0:00:01.659 *********** 2025-06-22 19:39:20.610417 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:20.610654 | orchestrator | 2025-06-22 19:39:20.612148 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-22 19:39:20.612186 | orchestrator | Sunday 22 June 2025 19:39:20 +0000 (0:00:06.056) 0:00:07.715 *********** 2025-06-22 19:39:21.169668 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:21.169775 | orchestrator | 2025-06-22 19:39:21.170500 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-22 19:39:21.172593 | orchestrator | Sunday 22 June 2025 19:39:21 +0000 (0:00:00.559) 0:00:08.275 *********** 2025-06-22 19:39:21.612101 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:21.613602 | orchestrator | 2025-06-22 19:39:21.613642 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-22 19:39:21.614113 | orchestrator | Sunday 22 June 2025 19:39:21 +0000 (0:00:00.440) 0:00:08.716 *********** 2025-06-22 19:39:22.129761 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:22.130873 | orchestrator | 2025-06-22 19:39:22.132793 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-22 19:39:22.132834 | orchestrator | Sunday 22 June 2025 19:39:22 +0000 (0:00:00.519) 0:00:09.235 *********** 2025-06-22 19:39:22.685804 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:22.685905 | orchestrator | 2025-06-22 19:39:22.687012 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-22 19:39:22.687852 | orchestrator | Sunday 22 June 2025 19:39:22 +0000 (0:00:00.551) 0:00:09.787 *********** 2025-06-22 19:39:23.089746 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:23.090449 | orchestrator | 2025-06-22 19:39:23.090631 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-22 19:39:23.090655 | orchestrator | Sunday 22 June 2025 19:39:23 +0000 (0:00:00.409) 0:00:10.196 *********** 2025-06-22 19:39:24.272918 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:24.273081 | orchestrator | 2025-06-22 19:39:24.273253 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-22 19:39:24.274093 | orchestrator | Sunday 22 June 2025 19:39:24 +0000 (0:00:01.180) 0:00:11.377 *********** 2025-06-22 19:39:25.181391 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:39:25.182694 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:25.183335 | orchestrator | 2025-06-22 19:39:25.184846 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-22 19:39:25.185962 | orchestrator | Sunday 22 June 2025 19:39:25 +0000 (0:00:00.909) 0:00:12.286 *********** 2025-06-22 19:39:26.832968 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:26.833413 | orchestrator | 2025-06-22 19:39:26.834435 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-22 19:39:26.836144 | orchestrator | Sunday 22 June 2025 19:39:26 +0000 (0:00:01.649) 0:00:13.936 *********** 2025-06-22 19:39:27.746283 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:27.746388 | orchestrator | 2025-06-22 19:39:27.746680 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:39:27.746900 | orchestrator | 2025-06-22 19:39:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:39:27.747584 | orchestrator | 2025-06-22 19:39:27 | INFO  | Please wait and do not abort execution. 2025-06-22 19:39:27.748100 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:39:27.749542 | orchestrator | 2025-06-22 19:39:27.750492 | orchestrator | 2025-06-22 19:39:27.751698 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:39:27.752636 | orchestrator | Sunday 22 June 2025 19:39:27 +0000 (0:00:00.913) 0:00:14.850 *********** 2025-06-22 19:39:27.753477 | orchestrator | =============================================================================== 2025-06-22 19:39:27.754438 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.06s 2025-06-22 19:39:27.755596 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-06-22 19:39:27.755891 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s 2025-06-22 19:39:27.757213 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-06-22 19:39:27.757537 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2025-06-22 19:39:27.758644 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-06-22 19:39:27.759195 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-06-22 19:39:27.759641 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s 2025-06-22 19:39:27.760183 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-06-22 19:39:27.760699 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-06-22 19:39:27.761556 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-06-22 19:39:28.270092 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-22 19:39:28.301332 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-22 19:39:28.301660 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-22 19:39:28.392891 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 152 0 --:--:-- --:--:-- --:--:-- 153 2025-06-22 19:39:28.406457 | orchestrator | + osism apply --environment custom workarounds 
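The wireguard play above ran on testbed-manager only: it installs iptables and wireguard, creates a server key pair and a preshared key, renders /etc/wireguard/wg0.conf together with client configuration files, and manages/restarts wg-quick@wg0.service; prepare-wireguard-configuration.sh then performs the small curl download seen above (14 bytes), presumably to fill in the externally reachable endpoint for the client profile. The role's own template is not shown in this log, so the following is only a rough sketch of the equivalent manual steps, with placeholder addresses and keys.

    # Sketch only - placeholder values, not the osism.services.wireguard template.
    umask 077
    wg genkey | tee server.key | wg pubkey > server.pub   # server key pair
    wg genpsk > preshared.key                              # preshared key

    cat > /etc/wireguard/wg0.conf <<EOF
    [Interface]
    Address = 192.168.0.1/24          # placeholder tunnel address
    ListenPort = 51820
    PrivateKey = $(cat server.key)

    [Peer]                            # one block per rendered client configuration
    PublicKey = <client public key>
    PresharedKey = $(cat preshared.key)
    AllowedIPs = 192.168.0.2/32       # placeholder client tunnel address
    EOF

    # Corresponds to "Manage wg-quick@wg0.service service" / "Restart wg0 service".
    systemctl enable --now wg-quick@wg0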
2025-06-22 19:39:30.071849 | orchestrator | 2025-06-22 19:39:30 | INFO  | Trying to run play workarounds in environment custom 2025-06-22 19:39:30.076342 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:39:30.076407 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:39:30.076420 | orchestrator | Registering Redlock._release_script 2025-06-22 19:39:30.133534 | orchestrator | 2025-06-22 19:39:30 | INFO  | Task 074c5713-7066-4753-90f1-26ecc315a62b (workarounds) was prepared for execution. 2025-06-22 19:39:30.133645 | orchestrator | 2025-06-22 19:39:30 | INFO  | It takes a moment until task 074c5713-7066-4753-90f1-26ecc315a62b (workarounds) has been started and output is visible here. 2025-06-22 19:39:34.052481 | orchestrator | 2025-06-22 19:39:34.055757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:39:34.055794 | orchestrator | 2025-06-22 19:39:34.055808 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-22 19:39:34.056126 | orchestrator | Sunday 22 June 2025 19:39:34 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-22 19:39:34.219132 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-22 19:39:34.302782 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-22 19:39:34.385296 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-22 19:39:34.466238 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-22 19:39:34.647228 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-22 19:39:34.808400 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-22 19:39:34.808530 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-22 19:39:34.810692 | orchestrator | 2025-06-22 19:39:34.810778 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-22 19:39:34.810794 | orchestrator | 2025-06-22 19:39:34.811251 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:39:34.811683 | orchestrator | Sunday 22 June 2025 19:39:34 +0000 (0:00:00.757) 0:00:00.909 *********** 2025-06-22 19:39:37.106284 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:37.107423 | orchestrator | 2025-06-22 19:39:37.107988 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-22 19:39:37.109138 | orchestrator | 2025-06-22 19:39:37.110120 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:39:37.110500 | orchestrator | Sunday 22 June 2025 19:39:37 +0000 (0:00:02.295) 0:00:03.205 *********** 2025-06-22 19:39:38.848101 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:38.848273 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:38.850344 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:38.852167 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:38.852212 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:38.852912 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:38.853700 | orchestrator | 2025-06-22 19:39:38.854632 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-22 19:39:38.855729 | orchestrator | 2025-06-22 19:39:38.856136 | orchestrator | TASK 
[Copy custom CA certificates] ********************************************* 2025-06-22 19:39:38.857322 | orchestrator | Sunday 22 June 2025 19:39:38 +0000 (0:00:01.742) 0:00:04.947 *********** 2025-06-22 19:39:40.267907 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.268272 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.270502 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.271760 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.272461 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.273849 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:39:40.274701 | orchestrator | 2025-06-22 19:39:40.276064 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-22 19:39:40.276852 | orchestrator | Sunday 22 June 2025 19:39:40 +0000 (0:00:01.418) 0:00:06.366 *********** 2025-06-22 19:39:43.794956 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:43.795119 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:43.796377 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:43.796491 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:43.796514 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:43.796806 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:43.798090 | orchestrator | 2025-06-22 19:39:43.798830 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-22 19:39:43.799206 | orchestrator | Sunday 22 June 2025 19:39:43 +0000 (0:00:03.530) 0:00:09.896 *********** 2025-06-22 19:39:43.949025 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:44.029435 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:44.109652 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:44.187064 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:44.478090 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:44.478284 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:44.479262 | orchestrator | 2025-06-22 19:39:44.482716 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-22 19:39:44.482765 | orchestrator | 2025-06-22 19:39:44.482777 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-22 19:39:44.482789 | orchestrator | Sunday 22 June 2025 19:39:44 +0000 (0:00:00.682) 0:00:10.578 *********** 2025-06-22 19:39:46.034307 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:46.034643 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:46.036106 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:46.036184 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:46.040661 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:46.040684 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:46.040695 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:46.040706 | orchestrator | 2025-06-22 19:39:46.044314 | orchestrator | TASK [Copy workarounds systemd 
unit file] ************************************** 2025-06-22 19:39:46.044770 | orchestrator | Sunday 22 June 2025 19:39:46 +0000 (0:00:01.556) 0:00:12.135 *********** 2025-06-22 19:39:47.544773 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:47.545500 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.546870 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.547487 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.548657 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.549280 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.550521 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.551036 | orchestrator | 2025-06-22 19:39:47.551866 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-22 19:39:47.552748 | orchestrator | Sunday 22 June 2025 19:39:47 +0000 (0:00:01.507) 0:00:13.642 *********** 2025-06-22 19:39:49.059629 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:49.060210 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:49.061024 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:49.061719 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:49.062606 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:49.063316 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:49.063754 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:49.064856 | orchestrator | 2025-06-22 19:39:49.066640 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-22 19:39:49.067222 | orchestrator | Sunday 22 June 2025 19:39:49 +0000 (0:00:01.518) 0:00:15.160 *********** 2025-06-22 19:39:50.862771 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:50.863747 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:50.864441 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:50.865610 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:50.866806 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:50.868395 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:50.869806 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:50.870914 | orchestrator | 2025-06-22 19:39:50.871957 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-22 19:39:50.872825 | orchestrator | Sunday 22 June 2025 19:39:50 +0000 (0:00:01.800) 0:00:16.961 *********** 2025-06-22 19:39:51.021543 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:51.101533 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:51.179675 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:51.252269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:51.330133 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:51.454891 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:51.455739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:51.456993 | orchestrator | 2025-06-22 19:39:51.457651 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-22 19:39:51.458926 | orchestrator | 2025-06-22 19:39:51.460225 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-22 19:39:51.460778 | orchestrator | Sunday 22 June 2025 19:39:51 +0000 (0:00:00.594) 0:00:17.555 *********** 2025-06-22 19:39:53.988001 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:53.988176 | orchestrator | ok: [testbed-node-4] 
2025-06-22 19:39:53.988549 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:53.992308 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:53.992334 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:53.993618 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:53.994851 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:53.995705 | orchestrator | 2025-06-22 19:39:53.996477 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:39:53.996911 | orchestrator | 2025-06-22 19:39:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:39:53.997261 | orchestrator | 2025-06-22 19:39:53 | INFO  | Please wait and do not abort execution. 2025-06-22 19:39:53.998590 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:39:53.999299 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.000314 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.001256 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.001820 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.003096 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.003688 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:39:54.005301 | orchestrator | 2025-06-22 19:39:54.005954 | orchestrator | 2025-06-22 19:39:54.007127 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:39:54.008023 | orchestrator | Sunday 22 June 2025 19:39:53 +0000 (0:00:02.532) 0:00:20.088 *********** 2025-06-22 19:39:54.009298 | orchestrator | =============================================================================== 2025-06-22 19:39:54.009702 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.53s 2025-06-22 19:39:54.012701 | orchestrator | Install python3-docker -------------------------------------------------- 2.53s 2025-06-22 19:39:54.013947 | orchestrator | Apply netplan configuration --------------------------------------------- 2.30s 2025-06-22 19:39:54.014294 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s 2025-06-22 19:39:54.015629 | orchestrator | Apply netplan configuration --------------------------------------------- 1.74s 2025-06-22 19:39:54.016957 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.56s 2025-06-22 19:39:54.018011 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s 2025-06-22 19:39:54.018643 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s 2025-06-22 19:39:54.019233 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.42s 2025-06-22 19:39:54.019929 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-06-22 19:39:54.020311 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s 2025-06-22 19:39:54.021240 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-06-22 19:39:54.620299 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:39:56.269105 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:39:56.269225 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:39:56.269251 | orchestrator | Registering Redlock._release_script 2025-06-22 19:39:56.330988 | orchestrator | 2025-06-22 19:39:56 | INFO  | Task 7dfb126f-e3c6-43c0-9e48-a0946065805b (reboot) was prepared for execution. 2025-06-22 19:39:56.331096 | orchestrator | 2025-06-22 19:39:56 | INFO  | It takes a moment until task 7dfb126f-e3c6-43c0-9e48-a0946065805b (reboot) has been started and output is visible here. 2025-06-22 19:40:00.305087 | orchestrator | 2025-06-22 19:40:00.306066 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:40:00.307912 | orchestrator | 2025-06-22 19:40:00.308536 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:40:00.309234 | orchestrator | Sunday 22 June 2025 19:40:00 +0000 (0:00:00.215) 0:00:00.215 *********** 2025-06-22 19:40:00.406513 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:00.406721 | orchestrator | 2025-06-22 19:40:00.407223 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:00.407405 | orchestrator | Sunday 22 June 2025 19:40:00 +0000 (0:00:00.105) 0:00:00.320 *********** 2025-06-22 19:40:01.315619 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:01.315862 | orchestrator | 2025-06-22 19:40:01.317255 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:01.318149 | orchestrator | Sunday 22 June 2025 19:40:01 +0000 (0:00:00.906) 0:00:01.227 *********** 2025-06-22 19:40:01.432505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:01.432743 | orchestrator | 2025-06-22 19:40:01.433650 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:40:01.434813 | orchestrator | 2025-06-22 19:40:01.435611 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:40:01.436378 | orchestrator | Sunday 22 June 2025 19:40:01 +0000 (0:00:00.116) 0:00:01.343 *********** 2025-06-22 19:40:01.527122 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:01.527413 | orchestrator | 2025-06-22 19:40:01.528092 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:01.529170 | orchestrator | Sunday 22 June 2025 19:40:01 +0000 (0:00:00.096) 0:00:01.440 *********** 2025-06-22 19:40:02.146222 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:02.147291 | orchestrator | 2025-06-22 19:40:02.148261 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:02.149052 | orchestrator | Sunday 22 June 2025 19:40:02 +0000 (0:00:00.619) 0:00:02.060 *********** 2025-06-22 19:40:02.262454 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:02.263740 | orchestrator | 2025-06-22 19:40:02.265439 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:40:02.265947 | orchestrator | 2025-06-22 19:40:02.266749 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-06-22 19:40:02.267145 | orchestrator | Sunday 22 June 2025 19:40:02 +0000 (0:00:00.115) 0:00:02.175 *********** 2025-06-22 19:40:02.471432 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:02.472051 | orchestrator | 2025-06-22 19:40:02.472988 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:02.474178 | orchestrator | Sunday 22 June 2025 19:40:02 +0000 (0:00:00.207) 0:00:02.383 *********** 2025-06-22 19:40:03.100659 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:03.101070 | orchestrator | 2025-06-22 19:40:03.101779 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:03.102506 | orchestrator | Sunday 22 June 2025 19:40:03 +0000 (0:00:00.631) 0:00:03.014 *********** 2025-06-22 19:40:03.210803 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:03.212258 | orchestrator | 2025-06-22 19:40:03.213296 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:40:03.214187 | orchestrator | 2025-06-22 19:40:03.214830 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:40:03.215965 | orchestrator | Sunday 22 June 2025 19:40:03 +0000 (0:00:00.106) 0:00:03.121 *********** 2025-06-22 19:40:03.296018 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:03.296167 | orchestrator | 2025-06-22 19:40:03.296843 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:03.298873 | orchestrator | Sunday 22 June 2025 19:40:03 +0000 (0:00:00.087) 0:00:03.209 *********** 2025-06-22 19:40:03.914544 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:03.915673 | orchestrator | 2025-06-22 19:40:03.915768 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:03.916225 | orchestrator | Sunday 22 June 2025 19:40:03 +0000 (0:00:00.616) 0:00:03.826 *********** 2025-06-22 19:40:04.031149 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:04.031288 | orchestrator | 2025-06-22 19:40:04.031489 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:40:04.031599 | orchestrator | 2025-06-22 19:40:04.032099 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:40:04.032456 | orchestrator | Sunday 22 June 2025 19:40:04 +0000 (0:00:00.115) 0:00:03.941 *********** 2025-06-22 19:40:04.123831 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:04.123931 | orchestrator | 2025-06-22 19:40:04.123946 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:04.124037 | orchestrator | Sunday 22 June 2025 19:40:04 +0000 (0:00:00.096) 0:00:04.038 *********** 2025-06-22 19:40:04.820269 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:04.820773 | orchestrator | 2025-06-22 19:40:04.820840 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:04.820997 | orchestrator | Sunday 22 June 2025 19:40:04 +0000 (0:00:00.694) 0:00:04.733 *********** 2025-06-22 19:40:04.931385 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:04.932664 | orchestrator | 2025-06-22 19:40:04.933381 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-06-22 19:40:04.935152 | orchestrator | 2025-06-22 19:40:04.935203 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:40:04.935216 | orchestrator | Sunday 22 June 2025 19:40:04 +0000 (0:00:00.109) 0:00:04.842 *********** 2025-06-22 19:40:05.040345 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.040979 | orchestrator | 2025-06-22 19:40:05.041236 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:40:05.041625 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.111) 0:00:04.954 *********** 2025-06-22 19:40:05.702151 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.703454 | orchestrator | 2025-06-22 19:40:05.703946 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:40:05.704445 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.659) 0:00:05.613 *********** 2025-06-22 19:40:05.732506 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.734825 | orchestrator | 2025-06-22 19:40:05.734863 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:40:05.735321 | orchestrator | 2025-06-22 19:40:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:40:05.735366 | orchestrator | 2025-06-22 19:40:05 | INFO  | Please wait and do not abort execution. 2025-06-22 19:40:05.736693 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.737514 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.738155 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.738933 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.739912 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.740804 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:40:05.742134 | orchestrator | 2025-06-22 19:40:05.742985 | orchestrator | 2025-06-22 19:40:05.744158 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:40:05.744744 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.034) 0:00:05.647 *********** 2025-06-22 19:40:05.745398 | orchestrator | =============================================================================== 2025-06-22 19:40:05.746101 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.13s 2025-06-22 19:40:05.746901 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-06-22 19:40:05.747737 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2025-06-22 19:40:06.292866 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:40:07.935753 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:40:07.935856 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:40:07.935871 | orchestrator | Registering Redlock._release_script 2025-06-22 
19:40:07.991843 | orchestrator | 2025-06-22 19:40:07 | INFO  | Task be79c050-4d3a-4663-8f7c-193ab11291bb (wait-for-connection) was prepared for execution. 2025-06-22 19:40:07.991936 | orchestrator | 2025-06-22 19:40:07 | INFO  | It takes a moment until task be79c050-4d3a-4663-8f7c-193ab11291bb (wait-for-connection) has been started and output is visible here. 2025-06-22 19:40:12.017398 | orchestrator | 2025-06-22 19:40:12.017509 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-22 19:40:12.020030 | orchestrator | 2025-06-22 19:40:12.020061 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-22 19:40:12.020074 | orchestrator | Sunday 22 June 2025 19:40:12 +0000 (0:00:00.256) 0:00:00.256 *********** 2025-06-22 19:40:23.480863 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:23.481014 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:23.481234 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:23.482311 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:23.484800 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:23.485795 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:23.486704 | orchestrator | 2025-06-22 19:40:23.487389 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:40:23.488002 | orchestrator | 2025-06-22 19:40:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:40:23.488210 | orchestrator | 2025-06-22 19:40:23 | INFO  | Please wait and do not abort execution. 2025-06-22 19:40:23.489715 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.490607 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.491423 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.492158 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.493102 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.493634 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:23.494431 | orchestrator | 2025-06-22 19:40:23.495632 | orchestrator | 2025-06-22 19:40:23.495848 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:40:23.496776 | orchestrator | Sunday 22 June 2025 19:40:23 +0000 (0:00:11.464) 0:00:11.721 *********** 2025-06-22 19:40:23.497543 | orchestrator | =============================================================================== 2025-06-22 19:40:23.498084 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.47s 2025-06-22 19:40:24.045976 | orchestrator | + osism apply hddtemp 2025-06-22 19:40:25.696244 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:40:25.696344 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:40:25.696359 | orchestrator | Registering Redlock._release_script 2025-06-22 19:40:25.753789 | orchestrator | 2025-06-22 19:40:25 | INFO  | Task b0cf3f8d-0f3e-426c-81a1-6c60080a644e (hddtemp) was prepared for execution. 
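The reboot step above is deliberately split in two: the first play triggers the reboot without waiting for it to complete, and the separate wait-for-connection play then blocks until every node answers over SSH again. A minimal bash sketch of the same two-phase pattern, independent of the osism CLI (the host names and SSH options here are illustrative assumptions, not taken from the job):

    # Phase 1: trigger the reboots, but do not block on them.
    for host in testbed-node-{0..5}; do
        ssh -o BatchMode=yes "$host" 'sudo systemctl reboot' || true
    done

    # Phase 2: poll until each node accepts SSH again, roughly what the
    # wait-for-connection play above needs about 11 seconds for.
    for host in testbed-node-{0..5}; do
        until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
            sleep 5
        done
        echo "$host is reachable again"
    done

Splitting the trigger from the wait lets the nodes reboot concurrently; reachability is only checked afterwards, in a single pass over all of them.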
2025-06-22 19:40:25.754481 | orchestrator | 2025-06-22 19:40:25 | INFO  | It takes a moment until task b0cf3f8d-0f3e-426c-81a1-6c60080a644e (hddtemp) has been started and output is visible here. 2025-06-22 19:40:29.729782 | orchestrator | 2025-06-22 19:40:29.733875 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-22 19:40:29.734807 | orchestrator | 2025-06-22 19:40:29.735539 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-22 19:40:29.736230 | orchestrator | Sunday 22 June 2025 19:40:29 +0000 (0:00:00.261) 0:00:00.261 *********** 2025-06-22 19:40:29.861150 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:29.930236 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:30.002493 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:30.058902 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:30.174100 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:30.281157 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:30.282139 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:30.282895 | orchestrator | 2025-06-22 19:40:30.283811 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-22 19:40:30.284463 | orchestrator | Sunday 22 June 2025 19:40:30 +0000 (0:00:00.551) 0:00:00.813 *********** 2025-06-22 19:40:31.270311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:40:31.274006 | orchestrator | 2025-06-22 19:40:31.274083 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-22 19:40:31.274097 | orchestrator | Sunday 22 June 2025 19:40:31 +0000 (0:00:00.988) 0:00:01.802 *********** 2025-06-22 19:40:33.064642 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:33.064839 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:33.067344 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:33.068147 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:33.070134 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:33.070608 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:33.071562 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:33.072513 | orchestrator | 2025-06-22 19:40:33.073051 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-22 19:40:33.074339 | orchestrator | Sunday 22 June 2025 19:40:33 +0000 (0:00:01.794) 0:00:03.596 *********** 2025-06-22 19:40:33.674478 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:33.762103 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:34.187933 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:34.188147 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:34.189300 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:34.190362 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:34.190995 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:34.192260 | orchestrator | 2025-06-22 19:40:34.193089 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-22 19:40:34.195015 | orchestrator | Sunday 22 June 2025 19:40:34 +0000 (0:00:01.119) 0:00:04.716 *********** 2025-06-22 19:40:35.303915 | orchestrator | ok: [testbed-node-0] 2025-06-22 
19:40:35.304862 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:35.305790 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:35.306877 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:35.307960 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:35.308518 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:35.309170 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:35.309860 | orchestrator | 2025-06-22 19:40:35.310605 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-22 19:40:35.312597 | orchestrator | Sunday 22 June 2025 19:40:35 +0000 (0:00:01.119) 0:00:05.835 *********** 2025-06-22 19:40:35.740482 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:35.824745 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:35.899819 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:35.983665 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:36.107290 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:36.108129 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:36.109120 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:36.109703 | orchestrator | 2025-06-22 19:40:36.110554 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-22 19:40:36.113069 | orchestrator | Sunday 22 June 2025 19:40:36 +0000 (0:00:00.802) 0:00:06.637 *********** 2025-06-22 19:40:46.981305 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:46.981420 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:46.982182 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:46.983678 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:46.984541 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:46.985328 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:46.985991 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:46.986669 | orchestrator | 2025-06-22 19:40:46.986991 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-22 19:40:46.987701 | orchestrator | Sunday 22 June 2025 19:40:46 +0000 (0:00:10.872) 0:00:17.510 *********** 2025-06-22 19:40:48.360663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:40:48.360980 | orchestrator | 2025-06-22 19:40:48.361737 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-22 19:40:48.363345 | orchestrator | Sunday 22 June 2025 19:40:48 +0000 (0:00:01.380) 0:00:18.890 *********** 2025-06-22 19:40:50.194839 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:50.194936 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:50.194949 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:50.195666 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:50.197204 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:50.197974 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:50.198388 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:50.199697 | orchestrator | 2025-06-22 19:40:50.201121 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:40:50.201194 | orchestrator | 2025-06-22 19:40:50 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-22 19:40:50.201210 | orchestrator | 2025-06-22 19:40:50 | INFO  | Please wait and do not abort execution. 2025-06-22 19:40:50.201841 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:40:50.202003 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.202642 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.203631 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.204181 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.204725 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.205484 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:40:50.205784 | orchestrator | 2025-06-22 19:40:50.206722 | orchestrator | 2025-06-22 19:40:50.206910 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:40:50.207385 | orchestrator | Sunday 22 June 2025 19:40:50 +0000 (0:00:01.835) 0:00:20.726 *********** 2025-06-22 19:40:50.207915 | orchestrator | =============================================================================== 2025-06-22 19:40:50.208398 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 10.87s 2025-06-22 19:40:50.208906 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-06-22 19:40:50.209202 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.79s 2025-06-22 19:40:50.209692 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 2025-06-22 19:40:50.210428 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-06-22 19:40:50.210707 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-06-22 19:40:50.211280 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.99s 2025-06-22 19:40:50.211869 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s 2025-06-22 19:40:50.212240 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.55s 2025-06-22 19:40:50.827276 | orchestrator | ++ semver 9.1.0 7.1.1 2025-06-22 19:40:50.885301 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:40:50.885399 | orchestrator | + sudo systemctl restart manager.service 2025-06-22 19:41:05.517954 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:41:05.518144 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:41:05.518163 | orchestrator | + local max_attempts=60 2025-06-22 19:41:05.518175 | orchestrator | + local name=ceph-ansible 2025-06-22 19:41:05.518198 | orchestrator | + local attempt_num=1 2025-06-22 19:41:05.518210 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:05.550675 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:05.550780 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 
19:41:05.550796 | orchestrator | + sleep 5 2025-06-22 19:41:10.559391 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:10.590106 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:10.590190 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:10.590203 | orchestrator | + sleep 5 2025-06-22 19:41:15.593093 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:15.625364 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:15.625412 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:15.625418 | orchestrator | + sleep 5 2025-06-22 19:41:20.629185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:20.662671 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:20.662733 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:20.662747 | orchestrator | + sleep 5 2025-06-22 19:41:25.667609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:25.701396 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:25.701484 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:25.701499 | orchestrator | + sleep 5 2025-06-22 19:41:30.706587 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:30.745486 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:30.745615 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:30.745630 | orchestrator | + sleep 5 2025-06-22 19:41:35.750143 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:35.787452 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:35.787596 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:35.787621 | orchestrator | + sleep 5 2025-06-22 19:41:40.794335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:40.825136 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:40.825225 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:40.825237 | orchestrator | + sleep 5 2025-06-22 19:41:45.826950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:45.851360 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:45.851470 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:45.851488 | orchestrator | + sleep 5 2025-06-22 19:41:50.854223 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:50.891764 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:50.891851 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:50.891870 | orchestrator | + sleep 5 2025-06-22 19:41:55.895181 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:41:55.927915 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:41:55.928001 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:41:55.928016 | orchestrator | + sleep 5 2025-06-22 19:42:00.931262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:42:00.964898 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 
19:42:00.964960 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:42:00.964973 | orchestrator | + sleep 5 2025-06-22 19:42:05.969731 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:42:06.002641 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:06.002741 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:42:06.002762 | orchestrator | + sleep 5 2025-06-22 19:42:11.006880 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:42:11.044446 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:11.044564 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:42:11.044610 | orchestrator | + local max_attempts=60 2025-06-22 19:42:11.044692 | orchestrator | + local name=kolla-ansible 2025-06-22 19:42:11.044707 | orchestrator | + local attempt_num=1 2025-06-22 19:42:11.045542 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:42:11.086188 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:11.086309 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:42:11.086335 | orchestrator | + local max_attempts=60 2025-06-22 19:42:11.086354 | orchestrator | + local name=osism-ansible 2025-06-22 19:42:11.086368 | orchestrator | + local attempt_num=1 2025-06-22 19:42:11.086450 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:42:11.123182 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:11.123237 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:42:11.123248 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:42:11.272319 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-22 19:42:11.418986 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-22 19:42:11.571590 | orchestrator | ARA in osism-ansible already disabled. 2025-06-22 19:42:11.719922 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-22 19:42:11.720083 | orchestrator | + osism apply gather-facts 2025-06-22 19:42:13.444085 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:13.444187 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:13.444202 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:13.502357 | orchestrator | 2025-06-22 19:42:13 | INFO  | Task 2d4047c1-5463-4d44-9e32-14e931ab9593 (gather-facts) was prepared for execution. 2025-06-22 19:42:13.502449 | orchestrator | 2025-06-22 19:42:13 | INFO  | It takes a moment until task 2d4047c1-5463-4d44-9e32-14e931ab9593 (gather-facts) has been started and output is visible here. 
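The shell trace above comes from a helper that polls docker inspect until the named container reports a healthy state. Reconstructed from the trace, it looks roughly like the sketch below; only the polling loop itself is visible in the log, so the behaviour on timeout (message and return code) is an assumption:

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1

        # Ask Docker for the container's health status and retry every 5 seconds
        # until it reports "healthy" or the attempt budget is used up.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2   # assumed timeout handling
                return 1
            fi
            sleep 5
        done
    }

In the job it is called as wait_for_container_healthy 60 ceph-ansible (and likewise for kolla-ansible and osism-ansible), i.e. up to 60 polls at 5-second intervals; in this run ceph-ansible needed about a minute to go from unhealthy through starting to healthy after the manager.service restart, well within that budget.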
2025-06-22 19:42:17.007986 | orchestrator | 2025-06-22 19:42:17.008322 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:42:17.009515 | orchestrator | 2025-06-22 19:42:17.011991 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:42:17.013193 | orchestrator | Sunday 22 June 2025 19:42:16 +0000 (0:00:00.163) 0:00:00.163 *********** 2025-06-22 19:42:22.171712 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:22.172994 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:22.174062 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:22.174914 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:22.175748 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:22.176864 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:22.179489 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:22.179942 | orchestrator | 2025-06-22 19:42:22.180718 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:42:22.181232 | orchestrator | 2025-06-22 19:42:22.181858 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:42:22.182336 | orchestrator | Sunday 22 June 2025 19:42:22 +0000 (0:00:05.164) 0:00:05.327 *********** 2025-06-22 19:42:22.300409 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:22.382365 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:22.453314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:22.526749 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:22.599948 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:22.636374 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:22.636448 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:22.638249 | orchestrator | 2025-06-22 19:42:22.638369 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:22.638642 | orchestrator | 2025-06-22 19:42:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:42:22.638729 | orchestrator | 2025-06-22 19:42:22 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:42:22.639964 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.640253 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.640708 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.641147 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.641525 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.641951 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.642432 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:22.642824 | orchestrator | 2025-06-22 19:42:22.643165 | orchestrator | 2025-06-22 19:42:22.643587 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:22.643973 | orchestrator | Sunday 22 June 2025 19:42:22 +0000 (0:00:00.469) 0:00:05.797 *********** 2025-06-22 19:42:22.644317 | orchestrator | =============================================================================== 2025-06-22 19:42:22.647043 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.16s 2025-06-22 19:42:22.647093 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2025-06-22 19:42:23.040694 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-22 19:42:23.052442 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-22 19:42:23.069519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-22 19:42:23.088315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-22 19:42:23.101663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-22 19:42:23.110745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-22 19:42:23.119923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-22 19:42:23.129379 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-22 19:42:23.139198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-22 19:42:23.148703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-22 19:42:23.158457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-22 19:42:23.170943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-22 19:42:23.181614 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-06-22 19:42:23.191202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-22 19:42:23.200110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-22 19:42:23.209165 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-22 19:42:23.218663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-22 19:42:23.227747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-22 19:42:23.244160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-22 19:42:23.253255 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-22 19:42:23.266628 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-22 19:42:23.596372 | orchestrator | ok: Runtime: 0:19:20.820771 2025-06-22 19:42:23.697681 | 2025-06-22 19:42:23.697814 | TASK [Deploy services] 2025-06-22 19:42:24.232751 | orchestrator | skipping: Conditional result was False 2025-06-22 19:42:24.251781 | 2025-06-22 19:42:24.251955 | TASK [Deploy in a nutshell] 2025-06-22 19:42:24.937183 | orchestrator | + set -e 2025-06-22 19:42:24.937507 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:42:24.937549 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:42:24.937582 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:42:24.937603 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:42:24.937624 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:42:24.937663 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:42:24.937747 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:42:24.937783 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:42:24.937798 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:42:24.937813 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:42:24.937826 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:42:24.937844 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:42:24.937855 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:42:24.937888 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:42:24.937900 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:42:24.937914 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:42:24.937925 | orchestrator | ++ export ARA=false 2025-06-22 19:42:24.937936 | orchestrator | ++ ARA=false 2025-06-22 19:42:24.937947 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:42:24.937959 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:42:24.937970 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:42:24.937981 | orchestrator | ++ TEMPEST=false 2025-06-22 19:42:24.937992 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:42:24.938003 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:42:24.938057 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:42:24.938071 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 19:42:24.938082 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:42:24.938093 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 
19:42:24.938103 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:42:24.938114 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:42:24.938125 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:42:24.938135 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:42:24.938147 | orchestrator | 2025-06-22 19:42:24.938158 | orchestrator | # PULL IMAGES 2025-06-22 19:42:24.938169 | orchestrator | 2025-06-22 19:42:24.938180 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:42:24.938199 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:42:24.938210 | orchestrator | + echo 2025-06-22 19:42:24.938222 | orchestrator | + echo '# PULL IMAGES' 2025-06-22 19:42:24.938233 | orchestrator | + echo 2025-06-22 19:42:24.938460 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:42:24.989477 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:42:24.989548 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-22 19:42:26.466723 | orchestrator | 2025-06-22 19:42:26 | INFO  | Trying to run play pull-images in environment custom 2025-06-22 19:42:26.471384 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:26.471459 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:26.471474 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:26.527112 | orchestrator | 2025-06-22 19:42:26 | INFO  | Task 62ff19bd-3eff-4662-808c-8053fe5758fc (pull-images) was prepared for execution. 2025-06-22 19:42:26.527215 | orchestrator | 2025-06-22 19:42:26 | INFO  | It takes a moment until task 62ff19bd-3eff-4662-808c-8053fe5758fc (pull-images) has been started and output is visible here. 2025-06-22 19:42:30.253948 | orchestrator | 2025-06-22 19:42:30.254100 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-22 19:42:30.254121 | orchestrator | 2025-06-22 19:42:30.254679 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-22 19:42:30.255294 | orchestrator | Sunday 22 June 2025 19:42:30 +0000 (0:00:00.115) 0:00:00.115 *********** 2025-06-22 19:43:34.035569 | orchestrator | changed: [testbed-manager] 2025-06-22 19:43:34.035712 | orchestrator | 2025-06-22 19:43:34.035734 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-22 19:43:34.035963 | orchestrator | Sunday 22 June 2025 19:43:34 +0000 (0:01:03.785) 0:01:03.900 *********** 2025-06-22 19:44:25.980895 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-22 19:44:25.981069 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-22 19:44:25.981097 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-22 19:44:25.981144 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-22 19:44:25.981888 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-22 19:44:25.982537 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-22 19:44:25.982835 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-22 19:44:25.983277 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-22 19:44:25.984460 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-22 19:44:25.984598 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-22 19:44:25.984614 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-22 19:44:25.985440 | orchestrator | changed: [testbed-manager] => (item=magnum) 
2025-06-22 19:44:25.985458 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-22 19:44:25.985600 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-22 19:44:25.986494 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-22 19:44:25.986779 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-22 19:44:25.987486 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-22 19:44:25.987516 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-22 19:44:25.988424 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-22 19:44:25.988514 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-22 19:44:25.989450 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-22 19:44:25.989552 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-22 19:44:25.989583 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-22 19:44:25.990484 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-22 19:44:25.990634 | orchestrator | 2025-06-22 19:44:25.990842 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:44:25.991132 | orchestrator | 2025-06-22 19:44:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:44:25.991362 | orchestrator | 2025-06-22 19:44:25 | INFO  | Please wait and do not abort execution. 2025-06-22 19:44:25.991894 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:44:25.992188 | orchestrator | 2025-06-22 19:44:25.992610 | orchestrator | 2025-06-22 19:44:25.992833 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:44:25.993256 | orchestrator | Sunday 22 June 2025 19:44:25 +0000 (0:00:51.943) 0:01:55.843 *********** 2025-06-22 19:44:25.993504 | orchestrator | =============================================================================== 2025-06-22 19:44:25.993834 | orchestrator | Pull keystone image ---------------------------------------------------- 63.79s 2025-06-22 19:44:25.994197 | orchestrator | Pull other images ------------------------------------------------------ 51.94s 2025-06-22 19:44:28.238451 | orchestrator | 2025-06-22 19:44:28 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-22 19:44:28.242616 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:44:28.242665 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:44:28.242678 | orchestrator | Registering Redlock._release_script 2025-06-22 19:44:28.296348 | orchestrator | 2025-06-22 19:44:28 | INFO  | Task 9bc9b64b-108c-413f-bc67-9dc57734041b (wipe-partitions) was prepared for execution. 2025-06-22 19:44:28.296418 | orchestrator | 2025-06-22 19:44:28 | INFO  | It takes a moment until task 9bc9b64b-108c-413f-bc67-9dc57734041b (wipe-partitions) has been started and output is visible here. 
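Both the earlier manager.service restart and this pull-images run are gated on the manager version: the trace shows a semver helper comparing MANAGER_VERSION (9.1.0) against a minimum version and the script continuing only when the result is greater than or equal to zero. A sketch of that gating pattern as it appears in the trace (the exact contract of the semver helper and of the -r flag is inferred, not documented in the log; -e selects the environment, matching the "in environment custom" line above):

    # Run the pull-images play from the "custom" environment only on manager >= 7.0.0.
    # -r 2 appears to be a retry count for the play, in line with the
    # OSISM_APPLY_RETRY variable exported earlier in the script.
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
        osism apply -r 2 -e custom pull-images
    fi

In this run the two pull tasks account for just under two minutes on testbed-manager, as the recap above shows.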
2025-06-22 19:44:31.398274 | orchestrator | 2025-06-22 19:44:31.398416 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-22 19:44:31.398543 | orchestrator | 2025-06-22 19:44:31.398576 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-22 19:44:31.401200 | orchestrator | Sunday 22 June 2025 19:44:31 +0000 (0:00:00.098) 0:00:00.098 *********** 2025-06-22 19:44:31.890522 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:44:31.890633 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:44:31.890647 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:44:31.890659 | orchestrator | 2025-06-22 19:44:31.890671 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-22 19:44:31.890683 | orchestrator | Sunday 22 June 2025 19:44:31 +0000 (0:00:00.488) 0:00:00.586 *********** 2025-06-22 19:44:31.997589 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:32.065574 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:44:32.065801 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:44:32.065821 | orchestrator | 2025-06-22 19:44:32.066120 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-22 19:44:32.066349 | orchestrator | Sunday 22 June 2025 19:44:32 +0000 (0:00:00.179) 0:00:00.766 *********** 2025-06-22 19:44:32.667570 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:44:32.667762 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:44:32.668348 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:44:32.668846 | orchestrator | 2025-06-22 19:44:32.672487 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-22 19:44:32.673898 | orchestrator | Sunday 22 June 2025 19:44:32 +0000 (0:00:00.601) 0:00:01.367 *********** 2025-06-22 19:44:32.808567 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:32.898184 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:44:32.899629 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:44:32.903610 | orchestrator | 2025-06-22 19:44:32.907260 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-22 19:44:32.907326 | orchestrator | Sunday 22 June 2025 19:44:32 +0000 (0:00:00.230) 0:00:01.598 *********** 2025-06-22 19:44:33.948052 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:44:33.949310 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:44:33.949342 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:44:33.949357 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:44:33.950103 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:44:33.950388 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:44:33.950865 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:44:33.953297 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:44:33.953336 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:44:33.953894 | orchestrator | 2025-06-22 19:44:33.954654 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-22 19:44:33.954783 | orchestrator | Sunday 22 June 2025 19:44:33 +0000 (0:00:01.047) 0:00:02.646 *********** 2025-06-22 19:44:35.130223 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:44:35.130992 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:44:35.131029 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:44:35.131446 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:44:35.131963 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:44:35.132528 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:44:35.133550 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:44:35.135509 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:44:35.135623 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:44:35.135951 | orchestrator | 2025-06-22 19:44:35.136260 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-22 19:44:35.136635 | orchestrator | Sunday 22 June 2025 19:44:35 +0000 (0:00:01.181) 0:00:03.827 *********** 2025-06-22 19:44:37.127916 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:44:37.129967 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:44:37.130970 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:44:37.132362 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:44:37.133153 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:44:37.133524 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:44:37.133954 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:44:37.135741 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:44:37.136128 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:44:37.136432 | orchestrator | 2025-06-22 19:44:37.136639 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-22 19:44:37.137028 | orchestrator | Sunday 22 June 2025 19:44:37 +0000 (0:00:01.997) 0:00:05.824 *********** 2025-06-22 19:44:37.711494 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:44:37.712129 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:44:37.712399 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:44:37.714328 | orchestrator | 2025-06-22 19:44:37.715865 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-22 19:44:37.715909 | orchestrator | Sunday 22 June 2025 19:44:37 +0000 (0:00:00.587) 0:00:06.411 *********** 2025-06-22 19:44:38.226694 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:44:38.228568 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:44:38.229010 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:44:38.229808 | orchestrator | 2025-06-22 19:44:38.230291 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:44:38.231505 | orchestrator | 2025-06-22 19:44:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:44:38.231551 | orchestrator | 2025-06-22 19:44:38 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:44:38.232023 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:38.232059 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:38.232338 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:38.232760 | orchestrator | 2025-06-22 19:44:38.233519 | orchestrator | 2025-06-22 19:44:38.235067 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:44:38.235407 | orchestrator | Sunday 22 June 2025 19:44:38 +0000 (0:00:00.511) 0:00:06.922 *********** 2025-06-22 19:44:38.236555 | orchestrator | =============================================================================== 2025-06-22 19:44:38.236590 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.00s 2025-06-22 19:44:38.236610 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.18s 2025-06-22 19:44:38.237045 | orchestrator | Check device availability ----------------------------------------------- 1.05s 2025-06-22 19:44:38.237519 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2025-06-22 19:44:38.237797 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-06-22 19:44:38.238355 | orchestrator | Request device events from the kernel ----------------------------------- 0.51s 2025-06-22 19:44:38.238899 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.49s 2025-06-22 19:44:38.240396 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-06-22 19:44:38.240649 | orchestrator | Remove all rook related logical devices --------------------------------- 0.18s 2025-06-22 19:44:40.074064 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:44:40.074152 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:44:40.074167 | orchestrator | Registering Redlock._release_script 2025-06-22 19:44:40.125078 | orchestrator | 2025-06-22 19:44:40 | INFO  | Task 84226bce-2cf4-4c0b-beb5-a39ff6f91b78 (facts) was prepared for execution. 2025-06-22 19:44:40.125166 | orchestrator | 2025-06-22 19:44:40 | INFO  | It takes a moment until task 84226bce-2cf4-4c0b-beb5-a39ff6f91b78 (facts) has been started and output is visible here. 
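The wipe-partitions play clears any previous Ceph or Rook state from the OSD candidate disks before they are handed to ceph-ansible: leftover logical volumes are removed, filesystem signatures are wiped, the start of each device is zeroed, and udev is re-triggered so the kernel sees clean devices. Done by hand on one node, the core of it corresponds roughly to the following (device names are the ones from the log; the exact wipefs, dd and udevadm options used by the play are assumptions):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                    # drop filesystem/RAID/LVM signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # overwrite the first 32M with zeros
    done
    sudo udevadm control --reload-rules   # "Reload udev rules"
    sudo udevadm trigger                  # "Request device events from the kernel"

Only testbed-node-3, -4 and -5 are touched here, presumably the nodes that will carry the Ceph OSDs.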
2025-06-22 19:44:44.207704 | orchestrator | 2025-06-22 19:44:44.207860 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:44:44.207877 | orchestrator | 2025-06-22 19:44:44.207889 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:44:44.207901 | orchestrator | Sunday 22 June 2025 19:44:44 +0000 (0:00:00.322) 0:00:00.322 *********** 2025-06-22 19:44:45.331571 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:44:45.333254 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:44:45.333384 | orchestrator | ok: [testbed-manager] 2025-06-22 19:44:45.333483 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:44:45.333871 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:44:45.335059 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:44:45.335795 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:44:45.335829 | orchestrator | 2025-06-22 19:44:45.338593 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:44:45.340630 | orchestrator | Sunday 22 June 2025 19:44:45 +0000 (0:00:01.143) 0:00:01.465 *********** 2025-06-22 19:44:45.529661 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:44:45.614338 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:44:45.688445 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:44:45.766418 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:44:45.903284 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:46.761813 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:44:46.762247 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:44:46.763838 | orchestrator | 2025-06-22 19:44:46.764590 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:44:46.766851 | orchestrator | 2025-06-22 19:44:46.770338 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:44:46.770379 | orchestrator | Sunday 22 June 2025 19:44:46 +0000 (0:00:01.426) 0:00:02.892 *********** 2025-06-22 19:44:51.295419 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:44:51.296035 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:44:51.296926 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:44:51.297703 | orchestrator | ok: [testbed-manager] 2025-06-22 19:44:51.298930 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:44:51.300171 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:44:51.300593 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:44:51.301660 | orchestrator | 2025-06-22 19:44:51.302129 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:44:51.303701 | orchestrator | 2025-06-22 19:44:51.304403 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:44:51.305352 | orchestrator | Sunday 22 June 2025 19:44:51 +0000 (0:00:04.541) 0:00:07.433 *********** 2025-06-22 19:44:51.445640 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:44:51.519963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:44:51.600989 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:44:51.682896 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:44:51.779862 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:51.829142 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:44:51.829231 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 19:44:51.829897 | orchestrator | 2025-06-22 19:44:51.831374 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:44:51.832609 | orchestrator | 2025-06-22 19:44:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:44:51.832660 | orchestrator | 2025-06-22 19:44:51 | INFO  | Please wait and do not abort execution. 2025-06-22 19:44:51.833525 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.833569 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.833949 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.836719 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.837281 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.838169 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.838291 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:44:51.838705 | orchestrator | 2025-06-22 19:44:51.839321 | orchestrator | 2025-06-22 19:44:51.839719 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:44:51.840010 | orchestrator | Sunday 22 June 2025 19:44:51 +0000 (0:00:00.525) 0:00:07.959 *********** 2025-06-22 19:44:51.840542 | orchestrator | =============================================================================== 2025-06-22 19:44:51.840824 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s 2025-06-22 19:44:51.841272 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s 2025-06-22 19:44:51.841713 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2025-06-22 19:44:51.842085 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-22 19:44:54.094912 | orchestrator | 2025-06-22 19:44:54 | INFO  | Task 91d44c37-f41a-4a90-a42b-6daaee57153b (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-22 19:44:54.095019 | orchestrator | 2025-06-22 19:44:54 | INFO  | It takes a moment until task 91d44c37-f41a-4a90-a42b-6daaee57153b (ceph-configure-lvm-volumes) has been started and output is visible here. 
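Before the Ceph LVM configuration, the facts play makes sure a custom facts directory exists on every host and would copy any fact files into it (skipped on every host in this run), then refreshes the regular Ansible facts. With Ansible's local-facts convention such a directory is normally /etc/ansible/facts.d, where every *.fact file or executable shows up under ansible_local after the next gathering run; whether the role uses exactly that path is an assumption. A minimal illustrative fact file, with values borrowed from the manager-vars.sh trace above:

    # Illustrative only: a static JSON local fact; path and file name are assumptions.
    sudo mkdir -p /etc/ansible/facts.d
    printf '{ "deploy_mode": "manager", "ceph_stack": "ceph-ansible" }\n' \
        | sudo tee /etc/ansible/facts.d/testbed.fact
    # After the next fact run this is readable as ansible_local.testbed.deploy_mode etc.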
2025-06-22 19:44:58.443866 | orchestrator | 2025-06-22 19:44:58.443985 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:44:58.444543 | orchestrator | 2025-06-22 19:44:58.444710 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:44:58.445745 | orchestrator | Sunday 22 June 2025 19:44:58 +0000 (0:00:00.323) 0:00:00.323 *********** 2025-06-22 19:44:58.736452 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:44:58.738370 | orchestrator | 2025-06-22 19:44:58.739930 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:44:58.740858 | orchestrator | Sunday 22 June 2025 19:44:58 +0000 (0:00:00.296) 0:00:00.620 *********** 2025-06-22 19:44:59.017951 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:44:59.018102 | orchestrator | 2025-06-22 19:44:59.018119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:44:59.018370 | orchestrator | Sunday 22 June 2025 19:44:59 +0000 (0:00:00.279) 0:00:00.899 *********** 2025-06-22 19:44:59.463220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:44:59.464751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:44:59.467463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:44:59.468434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:44:59.469563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:44:59.470486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:44:59.471372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:44:59.472419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:44:59.472875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:44:59.475038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:44:59.475660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:44:59.476212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:44:59.476828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:44:59.477854 | orchestrator | 2025-06-22 19:44:59.478123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:44:59.478560 | orchestrator | Sunday 22 June 2025 19:44:59 +0000 (0:00:00.445) 0:00:01.345 *********** 2025-06-22 19:44:59.981219 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:59.982086 | orchestrator | 2025-06-22 19:44:59.984825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:44:59.987537 | orchestrator | Sunday 22 June 2025 19:44:59 +0000 (0:00:00.519) 0:00:01.865 *********** 2025-06-22 19:45:00.194152 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:45:00.194284 | orchestrator | 2025-06-22 19:45:00.195149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:00.196903 | orchestrator | Sunday 22 June 2025 19:45:00 +0000 (0:00:00.210) 0:00:02.075 *********** 2025-06-22 19:45:00.427060 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:00.430082 | orchestrator | 2025-06-22 19:45:00.431690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:00.432626 | orchestrator | Sunday 22 June 2025 19:45:00 +0000 (0:00:00.235) 0:00:02.311 *********** 2025-06-22 19:45:00.628947 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:00.629098 | orchestrator | 2025-06-22 19:45:00.629672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:00.630227 | orchestrator | Sunday 22 June 2025 19:45:00 +0000 (0:00:00.200) 0:00:02.511 *********** 2025-06-22 19:45:00.842306 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:00.843344 | orchestrator | 2025-06-22 19:45:00.844332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:00.845895 | orchestrator | Sunday 22 June 2025 19:45:00 +0000 (0:00:00.214) 0:00:02.726 *********** 2025-06-22 19:45:01.086421 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:01.086520 | orchestrator | 2025-06-22 19:45:01.087648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:01.089705 | orchestrator | Sunday 22 June 2025 19:45:01 +0000 (0:00:00.243) 0:00:02.969 *********** 2025-06-22 19:45:01.319238 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:01.321037 | orchestrator | 2025-06-22 19:45:01.323370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:01.323411 | orchestrator | Sunday 22 June 2025 19:45:01 +0000 (0:00:00.231) 0:00:03.200 *********** 2025-06-22 19:45:01.523935 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:01.524105 | orchestrator | 2025-06-22 19:45:01.525290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:01.526136 | orchestrator | Sunday 22 June 2025 19:45:01 +0000 (0:00:00.204) 0:00:03.405 *********** 2025-06-22 19:45:01.993501 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111) 2025-06-22 19:45:01.995993 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111) 2025-06-22 19:45:01.997375 | orchestrator | 2025-06-22 19:45:01.997618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:01.998319 | orchestrator | Sunday 22 June 2025 19:45:01 +0000 (0:00:00.470) 0:00:03.875 *********** 2025-06-22 19:45:02.401191 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f) 2025-06-22 19:45:02.401309 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f) 2025-06-22 19:45:02.401326 | orchestrator | 2025-06-22 19:45:02.403031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:02.403060 | orchestrator | Sunday 22 June 2025 19:45:02 +0000 (0:00:00.408) 0:00:04.284 *********** 2025-06-22 
19:45:03.071023 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0) 2025-06-22 19:45:03.071559 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0) 2025-06-22 19:45:03.072851 | orchestrator | 2025-06-22 19:45:03.073812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:03.075189 | orchestrator | Sunday 22 June 2025 19:45:03 +0000 (0:00:00.670) 0:00:04.955 *********** 2025-06-22 19:45:03.705033 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d) 2025-06-22 19:45:03.705521 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d) 2025-06-22 19:45:03.710127 | orchestrator | 2025-06-22 19:45:03.710166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:03.712634 | orchestrator | Sunday 22 June 2025 19:45:03 +0000 (0:00:00.631) 0:00:05.586 *********** 2025-06-22 19:45:04.492593 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:45:04.492749 | orchestrator | 2025-06-22 19:45:04.497156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:04.497201 | orchestrator | Sunday 22 June 2025 19:45:04 +0000 (0:00:00.787) 0:00:06.373 *********** 2025-06-22 19:45:04.880912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:45:04.881851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:45:04.883058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:45:04.884413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:45:04.886904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:45:04.888146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:45:04.889025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:45:04.891311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:45:04.891344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:45:04.891356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:45:04.892201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:45:04.893217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:45:04.894007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:45:04.894932 | orchestrator | 2025-06-22 19:45:04.895498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:04.896542 | orchestrator | Sunday 22 June 2025 19:45:04 +0000 (0:00:00.391) 0:00:06.765 *********** 2025-06-22 19:45:05.091582 | orchestrator | skipping: [testbed-node-3] 
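The long runs of per-item "included" and "skipping" messages above and below come from looping includes: for every device reported in the host facts (loop0..loop7, sda..sdd, sr0) the play includes a small task file that records matching /dev/disk/by-id links and partitions, and skips devices that have none. A rough sketch of that pattern, assuming the loop iterates over the gathered device facts (the actual contents of /ansible/tasks/_add-device-links.yml and _add-device-partitions.yml are not reproduced in this log):

# Illustrative pattern only; everything except the include paths is an assumption.
- name: Add known links to the list of available block devices
  ansible.builtin.include_tasks: /ansible/tasks/_add-device-links.yml
  loop: "{{ ansible_facts.devices.keys() | list }}"    # loop0..loop7, sda..sdd, sr0

- name: Add known partitions to the list of available block devices
  ansible.builtin.include_tasks: /ansible/tasks/_add-device-partitions.yml
  loop: "{{ ansible_facts.devices.keys() | list }}"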
2025-06-22 19:45:05.093605 | orchestrator | 2025-06-22 19:45:05.095284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:05.099493 | orchestrator | Sunday 22 June 2025 19:45:05 +0000 (0:00:00.208) 0:00:06.974 *********** 2025-06-22 19:45:05.336709 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:05.338114 | orchestrator | 2025-06-22 19:45:05.339592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:05.340670 | orchestrator | Sunday 22 June 2025 19:45:05 +0000 (0:00:00.242) 0:00:07.217 *********** 2025-06-22 19:45:05.549021 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:05.550498 | orchestrator | 2025-06-22 19:45:05.551527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:05.552429 | orchestrator | Sunday 22 June 2025 19:45:05 +0000 (0:00:00.216) 0:00:07.433 *********** 2025-06-22 19:45:05.757399 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:05.758269 | orchestrator | 2025-06-22 19:45:05.759418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:05.760389 | orchestrator | Sunday 22 June 2025 19:45:05 +0000 (0:00:00.208) 0:00:07.641 *********** 2025-06-22 19:45:05.966323 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:05.969578 | orchestrator | 2025-06-22 19:45:05.969609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:05.969624 | orchestrator | Sunday 22 June 2025 19:45:05 +0000 (0:00:00.205) 0:00:07.847 *********** 2025-06-22 19:45:06.195216 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:06.196077 | orchestrator | 2025-06-22 19:45:06.197238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:06.198242 | orchestrator | Sunday 22 June 2025 19:45:06 +0000 (0:00:00.231) 0:00:08.078 *********** 2025-06-22 19:45:06.411256 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:06.416164 | orchestrator | 2025-06-22 19:45:06.416714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:06.417393 | orchestrator | Sunday 22 June 2025 19:45:06 +0000 (0:00:00.212) 0:00:08.291 *********** 2025-06-22 19:45:06.592134 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:06.592749 | orchestrator | 2025-06-22 19:45:06.593651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:06.594202 | orchestrator | Sunday 22 June 2025 19:45:06 +0000 (0:00:00.183) 0:00:08.475 *********** 2025-06-22 19:45:07.803195 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:45:07.803416 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:45:07.803856 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:45:07.805063 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:45:07.805939 | orchestrator | 2025-06-22 19:45:07.806152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:07.807119 | orchestrator | Sunday 22 June 2025 19:45:07 +0000 (0:00:01.212) 0:00:09.687 *********** 2025-06-22 19:45:08.005509 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:08.005744 | orchestrator | 2025-06-22 19:45:08.007200 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:08.010845 | orchestrator | Sunday 22 June 2025 19:45:07 +0000 (0:00:00.201) 0:00:09.888 *********** 2025-06-22 19:45:08.206071 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:08.206683 | orchestrator | 2025-06-22 19:45:08.208510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:08.209441 | orchestrator | Sunday 22 June 2025 19:45:08 +0000 (0:00:00.201) 0:00:10.090 *********** 2025-06-22 19:45:08.421162 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:08.421956 | orchestrator | 2025-06-22 19:45:08.422908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:08.423366 | orchestrator | Sunday 22 June 2025 19:45:08 +0000 (0:00:00.214) 0:00:10.305 *********** 2025-06-22 19:45:08.640316 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:08.646394 | orchestrator | 2025-06-22 19:45:08.653889 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:45:08.654697 | orchestrator | Sunday 22 June 2025 19:45:08 +0000 (0:00:00.219) 0:00:10.524 *********** 2025-06-22 19:45:08.844963 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:45:08.847596 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:45:08.848533 | orchestrator | 2025-06-22 19:45:08.852168 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:45:08.852934 | orchestrator | Sunday 22 June 2025 19:45:08 +0000 (0:00:00.203) 0:00:10.728 *********** 2025-06-22 19:45:08.979866 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:08.980032 | orchestrator | 2025-06-22 19:45:08.980452 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:45:08.981086 | orchestrator | Sunday 22 June 2025 19:45:08 +0000 (0:00:00.136) 0:00:10.864 *********** 2025-06-22 19:45:09.113665 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:09.114217 | orchestrator | 2025-06-22 19:45:09.116850 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:45:09.122595 | orchestrator | Sunday 22 June 2025 19:45:09 +0000 (0:00:00.133) 0:00:10.998 *********** 2025-06-22 19:45:09.277877 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:09.279467 | orchestrator | 2025-06-22 19:45:09.280217 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:45:09.280965 | orchestrator | Sunday 22 June 2025 19:45:09 +0000 (0:00:00.163) 0:00:11.161 *********** 2025-06-22 19:45:09.454345 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:45:09.455536 | orchestrator | 2025-06-22 19:45:09.456976 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:45:09.458198 | orchestrator | Sunday 22 June 2025 19:45:09 +0000 (0:00:00.175) 0:00:11.336 *********** 2025-06-22 19:45:09.645057 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f4df137-04dd-5f0e-acd7-f62ec38375b4'}}) 2025-06-22 19:45:09.646073 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5c0aa592-9340-5775-8ceb-7aef1759a79b'}}) 2025-06-22 19:45:09.647274 | orchestrator | 
2025-06-22 19:45:09.648226 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:45:09.649078 | orchestrator | Sunday 22 June 2025 19:45:09 +0000 (0:00:00.192) 0:00:11.529 *********** 2025-06-22 19:45:09.811621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f4df137-04dd-5f0e-acd7-f62ec38375b4'}})  2025-06-22 19:45:09.812711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5c0aa592-9340-5775-8ceb-7aef1759a79b'}})  2025-06-22 19:45:09.813272 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:09.814712 | orchestrator | 2025-06-22 19:45:09.815894 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:45:09.816505 | orchestrator | Sunday 22 June 2025 19:45:09 +0000 (0:00:00.167) 0:00:11.696 *********** 2025-06-22 19:45:10.313042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f4df137-04dd-5f0e-acd7-f62ec38375b4'}})  2025-06-22 19:45:10.314605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5c0aa592-9340-5775-8ceb-7aef1759a79b'}})  2025-06-22 19:45:10.316111 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:10.318206 | orchestrator | 2025-06-22 19:45:10.322704 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:45:10.323009 | orchestrator | Sunday 22 June 2025 19:45:10 +0000 (0:00:00.500) 0:00:12.197 *********** 2025-06-22 19:45:10.486902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f4df137-04dd-5f0e-acd7-f62ec38375b4'}})  2025-06-22 19:45:10.488585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5c0aa592-9340-5775-8ceb-7aef1759a79b'}})  2025-06-22 19:45:10.488860 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:10.490003 | orchestrator | 2025-06-22 19:45:10.490474 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:45:10.492346 | orchestrator | Sunday 22 June 2025 19:45:10 +0000 (0:00:00.174) 0:00:12.371 *********** 2025-06-22 19:45:10.679028 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:45:10.680608 | orchestrator | 2025-06-22 19:45:10.681771 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:45:10.684575 | orchestrator | Sunday 22 June 2025 19:45:10 +0000 (0:00:00.187) 0:00:12.559 *********** 2025-06-22 19:45:10.845878 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:45:10.846439 | orchestrator | 2025-06-22 19:45:10.846693 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:45:10.847230 | orchestrator | Sunday 22 June 2025 19:45:10 +0000 (0:00:00.171) 0:00:12.730 *********** 2025-06-22 19:45:11.010082 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.010284 | orchestrator | 2025-06-22 19:45:11.011114 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:45:11.011754 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.163) 0:00:12.894 *********** 2025-06-22 19:45:11.187304 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.189205 | orchestrator | 2025-06-22 19:45:11.189247 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-06-22 19:45:11.192082 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.175) 0:00:13.069 *********** 2025-06-22 19:45:11.326671 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.327550 | orchestrator | 2025-06-22 19:45:11.329091 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:45:11.329547 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.137) 0:00:13.207 *********** 2025-06-22 19:45:11.504155 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:45:11.504859 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:45:11.508050 | orchestrator |  "sdb": { 2025-06-22 19:45:11.511375 | orchestrator |  "osd_lvm_uuid": "9f4df137-04dd-5f0e-acd7-f62ec38375b4" 2025-06-22 19:45:11.512924 | orchestrator |  }, 2025-06-22 19:45:11.514012 | orchestrator |  "sdc": { 2025-06-22 19:45:11.515010 | orchestrator |  "osd_lvm_uuid": "5c0aa592-9340-5775-8ceb-7aef1759a79b" 2025-06-22 19:45:11.516411 | orchestrator |  } 2025-06-22 19:45:11.517287 | orchestrator |  } 2025-06-22 19:45:11.518348 | orchestrator | } 2025-06-22 19:45:11.519185 | orchestrator | 2025-06-22 19:45:11.519963 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:45:11.521301 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.179) 0:00:13.386 *********** 2025-06-22 19:45:11.649482 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.649713 | orchestrator | 2025-06-22 19:45:11.651274 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:45:11.651313 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.145) 0:00:13.532 *********** 2025-06-22 19:45:11.839553 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.841412 | orchestrator | 2025-06-22 19:45:11.841976 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:45:11.842668 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.191) 0:00:13.724 *********** 2025-06-22 19:45:11.986216 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:11.986634 | orchestrator | 2025-06-22 19:45:11.987216 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:45:11.987721 | orchestrator | Sunday 22 June 2025 19:45:11 +0000 (0:00:00.145) 0:00:13.870 *********** 2025-06-22 19:45:12.187103 | orchestrator | changed: [testbed-node-3] => { 2025-06-22 19:45:12.188858 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:45:12.189973 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:45:12.194304 | orchestrator |  "sdb": { 2025-06-22 19:45:12.194878 | orchestrator |  "osd_lvm_uuid": "9f4df137-04dd-5f0e-acd7-f62ec38375b4" 2025-06-22 19:45:12.196413 | orchestrator |  }, 2025-06-22 19:45:12.200727 | orchestrator |  "sdc": { 2025-06-22 19:45:12.201691 | orchestrator |  "osd_lvm_uuid": "5c0aa592-9340-5775-8ceb-7aef1759a79b" 2025-06-22 19:45:12.203459 | orchestrator |  } 2025-06-22 19:45:12.203935 | orchestrator |  }, 2025-06-22 19:45:12.205054 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:45:12.205876 | orchestrator |  { 2025-06-22 19:45:12.207486 | orchestrator |  "data": "osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4", 2025-06-22 19:45:12.209699 | orchestrator |  "data_vg": "ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4" 2025-06-22 19:45:12.209740 | orchestrator |  }, 2025-06-22 
19:45:12.213618 | orchestrator |  { 2025-06-22 19:45:12.214857 | orchestrator |  "data": "osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b", 2025-06-22 19:45:12.215185 | orchestrator |  "data_vg": "ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b" 2025-06-22 19:45:12.216072 | orchestrator |  } 2025-06-22 19:45:12.216586 | orchestrator |  ] 2025-06-22 19:45:12.216955 | orchestrator |  } 2025-06-22 19:45:12.219607 | orchestrator | } 2025-06-22 19:45:12.219904 | orchestrator | 2025-06-22 19:45:12.220509 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:45:12.220917 | orchestrator | Sunday 22 June 2025 19:45:12 +0000 (0:00:00.199) 0:00:14.070 *********** 2025-06-22 19:45:14.539623 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:45:14.539724 | orchestrator | 2025-06-22 19:45:14.540058 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:45:14.540538 | orchestrator | 2025-06-22 19:45:14.544663 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:45:14.544903 | orchestrator | Sunday 22 June 2025 19:45:14 +0000 (0:00:02.351) 0:00:16.421 *********** 2025-06-22 19:45:14.800475 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:45:14.801718 | orchestrator | 2025-06-22 19:45:14.802408 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:45:14.803584 | orchestrator | Sunday 22 June 2025 19:45:14 +0000 (0:00:00.262) 0:00:16.683 *********** 2025-06-22 19:45:15.055704 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:45:15.056541 | orchestrator | 2025-06-22 19:45:15.056930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:15.057684 | orchestrator | Sunday 22 June 2025 19:45:15 +0000 (0:00:00.253) 0:00:16.937 *********** 2025-06-22 19:45:15.491241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:45:15.491735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:45:15.493348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:45:15.496664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:45:15.496707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:45:15.496719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:45:15.496731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:45:15.499503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:45:15.499592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:45:15.499604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:45:15.499615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:45:15.499626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:45:15.500927 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:45:15.502096 | orchestrator | 2025-06-22 19:45:15.503219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:15.505385 | orchestrator | Sunday 22 June 2025 19:45:15 +0000 (0:00:00.437) 0:00:17.375 *********** 2025-06-22 19:45:15.710956 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:15.711844 | orchestrator | 2025-06-22 19:45:15.713024 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:15.714310 | orchestrator | Sunday 22 June 2025 19:45:15 +0000 (0:00:00.218) 0:00:17.593 *********** 2025-06-22 19:45:15.914314 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:15.914412 | orchestrator | 2025-06-22 19:45:15.914429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:15.914442 | orchestrator | Sunday 22 June 2025 19:45:15 +0000 (0:00:00.203) 0:00:17.796 *********** 2025-06-22 19:45:16.107863 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:16.110279 | orchestrator | 2025-06-22 19:45:16.111866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:16.115860 | orchestrator | Sunday 22 June 2025 19:45:16 +0000 (0:00:00.194) 0:00:17.991 *********** 2025-06-22 19:45:16.314309 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:16.315869 | orchestrator | 2025-06-22 19:45:16.317989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:16.320091 | orchestrator | Sunday 22 June 2025 19:45:16 +0000 (0:00:00.206) 0:00:18.197 *********** 2025-06-22 19:45:16.960192 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:16.960295 | orchestrator | 2025-06-22 19:45:16.960967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:16.962002 | orchestrator | Sunday 22 June 2025 19:45:16 +0000 (0:00:00.645) 0:00:18.843 *********** 2025-06-22 19:45:17.157652 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:17.159419 | orchestrator | 2025-06-22 19:45:17.160673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:17.161126 | orchestrator | Sunday 22 June 2025 19:45:17 +0000 (0:00:00.198) 0:00:19.042 *********** 2025-06-22 19:45:17.402547 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:17.405499 | orchestrator | 2025-06-22 19:45:17.407226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:17.409153 | orchestrator | Sunday 22 June 2025 19:45:17 +0000 (0:00:00.243) 0:00:19.285 *********** 2025-06-22 19:45:17.686622 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:17.687012 | orchestrator | 2025-06-22 19:45:17.690108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:17.690180 | orchestrator | Sunday 22 June 2025 19:45:17 +0000 (0:00:00.281) 0:00:19.567 *********** 2025-06-22 19:45:18.133588 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354) 2025-06-22 19:45:18.135322 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354) 2025-06-22 19:45:18.137558 | orchestrator | 2025-06-22 
19:45:18.137709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:18.138775 | orchestrator | Sunday 22 June 2025 19:45:18 +0000 (0:00:00.449) 0:00:20.017 *********** 2025-06-22 19:45:18.569830 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e) 2025-06-22 19:45:18.572976 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e) 2025-06-22 19:45:18.573212 | orchestrator | 2025-06-22 19:45:18.575102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:18.576154 | orchestrator | Sunday 22 June 2025 19:45:18 +0000 (0:00:00.435) 0:00:20.452 *********** 2025-06-22 19:45:19.001720 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985) 2025-06-22 19:45:19.004007 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985) 2025-06-22 19:45:19.005681 | orchestrator | 2025-06-22 19:45:19.008099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:19.009896 | orchestrator | Sunday 22 June 2025 19:45:18 +0000 (0:00:00.430) 0:00:20.883 *********** 2025-06-22 19:45:19.462733 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c) 2025-06-22 19:45:19.463960 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c) 2025-06-22 19:45:19.464523 | orchestrator | 2025-06-22 19:45:19.465295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:19.466013 | orchestrator | Sunday 22 June 2025 19:45:19 +0000 (0:00:00.463) 0:00:21.347 *********** 2025-06-22 19:45:19.775973 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:45:19.776170 | orchestrator | 2025-06-22 19:45:19.776236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:19.781469 | orchestrator | Sunday 22 June 2025 19:45:19 +0000 (0:00:00.310) 0:00:21.657 *********** 2025-06-22 19:45:20.185258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:45:20.186531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:45:20.187708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:45:20.191605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:45:20.192225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:45:20.192718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:45:20.193249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:45:20.193825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:45:20.194357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:45:20.194985 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:45:20.195878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:45:20.196933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:45:20.197319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:45:20.198933 | orchestrator | 2025-06-22 19:45:20.199299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:20.200077 | orchestrator | Sunday 22 June 2025 19:45:20 +0000 (0:00:00.411) 0:00:22.069 *********** 2025-06-22 19:45:20.371221 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:20.372606 | orchestrator | 2025-06-22 19:45:20.373210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:20.373905 | orchestrator | Sunday 22 June 2025 19:45:20 +0000 (0:00:00.186) 0:00:22.255 *********** 2025-06-22 19:45:21.096726 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:21.098958 | orchestrator | 2025-06-22 19:45:21.102308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:21.103409 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.724) 0:00:22.980 *********** 2025-06-22 19:45:21.305059 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:21.306659 | orchestrator | 2025-06-22 19:45:21.310847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:21.310894 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.207) 0:00:23.188 *********** 2025-06-22 19:45:21.513560 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:21.516174 | orchestrator | 2025-06-22 19:45:21.516262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:21.516335 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.208) 0:00:23.396 *********** 2025-06-22 19:45:21.725310 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:21.727589 | orchestrator | 2025-06-22 19:45:21.728643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:21.728795 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.211) 0:00:23.608 *********** 2025-06-22 19:45:21.913329 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:21.913653 | orchestrator | 2025-06-22 19:45:21.914415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:21.915564 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.191) 0:00:23.799 *********** 2025-06-22 19:45:22.092334 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:22.096046 | orchestrator | 2025-06-22 19:45:22.096098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:22.096599 | orchestrator | Sunday 22 June 2025 19:45:22 +0000 (0:00:00.176) 0:00:23.976 *********** 2025-06-22 19:45:22.335768 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:22.338159 | orchestrator | 2025-06-22 19:45:22.338389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:22.339588 | orchestrator | Sunday 22 June 2025 
19:45:22 +0000 (0:00:00.243) 0:00:24.220 *********** 2025-06-22 19:45:22.936662 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:45:22.938603 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:45:22.938632 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:45:22.939158 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:45:22.940910 | orchestrator | 2025-06-22 19:45:22.942122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:22.943058 | orchestrator | Sunday 22 June 2025 19:45:22 +0000 (0:00:00.598) 0:00:24.818 *********** 2025-06-22 19:45:23.129632 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:23.131195 | orchestrator | 2025-06-22 19:45:23.131337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:23.131364 | orchestrator | Sunday 22 June 2025 19:45:23 +0000 (0:00:00.195) 0:00:25.013 *********** 2025-06-22 19:45:23.308217 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:23.309792 | orchestrator | 2025-06-22 19:45:23.309893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:23.310463 | orchestrator | Sunday 22 June 2025 19:45:23 +0000 (0:00:00.179) 0:00:25.192 *********** 2025-06-22 19:45:23.472112 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:23.472364 | orchestrator | 2025-06-22 19:45:23.472788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:23.473566 | orchestrator | Sunday 22 June 2025 19:45:23 +0000 (0:00:00.161) 0:00:25.354 *********** 2025-06-22 19:45:23.657669 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:23.657757 | orchestrator | 2025-06-22 19:45:23.657774 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:45:23.659309 | orchestrator | Sunday 22 June 2025 19:45:23 +0000 (0:00:00.185) 0:00:25.539 *********** 2025-06-22 19:45:23.952563 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:45:23.952647 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:45:23.953474 | orchestrator | 2025-06-22 19:45:23.955179 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:45:23.956135 | orchestrator | Sunday 22 June 2025 19:45:23 +0000 (0:00:00.293) 0:00:25.833 *********** 2025-06-22 19:45:24.082224 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:24.085059 | orchestrator | 2025-06-22 19:45:24.085092 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:45:24.086440 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.132) 0:00:25.965 *********** 2025-06-22 19:45:24.199934 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:24.201729 | orchestrator | 2025-06-22 19:45:24.201763 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:45:24.202732 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.118) 0:00:26.083 *********** 2025-06-22 19:45:24.328120 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:24.328493 | orchestrator | 2025-06-22 19:45:24.330077 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 
19:45:24.331063 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.127) 0:00:26.211 *********** 2025-06-22 19:45:24.466685 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:45:24.467001 | orchestrator | 2025-06-22 19:45:24.468449 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:45:24.469501 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.139) 0:00:26.350 *********** 2025-06-22 19:45:24.606274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7d3102c-a914-5a7b-b709-ad20b0d5984a'}}) 2025-06-22 19:45:24.607847 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c557b89-2e3b-5795-aff3-9e4ccad52f24'}}) 2025-06-22 19:45:24.609557 | orchestrator | 2025-06-22 19:45:24.610744 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:45:24.611872 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.140) 0:00:26.490 *********** 2025-06-22 19:45:24.752326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7d3102c-a914-5a7b-b709-ad20b0d5984a'}})  2025-06-22 19:45:24.752567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c557b89-2e3b-5795-aff3-9e4ccad52f24'}})  2025-06-22 19:45:24.756118 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:24.756260 | orchestrator | 2025-06-22 19:45:24.756404 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:45:24.756890 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.146) 0:00:26.636 *********** 2025-06-22 19:45:24.888712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7d3102c-a914-5a7b-b709-ad20b0d5984a'}})  2025-06-22 19:45:24.888930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c557b89-2e3b-5795-aff3-9e4ccad52f24'}})  2025-06-22 19:45:24.889440 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:24.890842 | orchestrator | 2025-06-22 19:45:24.891486 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:45:24.891750 | orchestrator | Sunday 22 June 2025 19:45:24 +0000 (0:00:00.135) 0:00:26.771 *********** 2025-06-22 19:45:25.014695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7d3102c-a914-5a7b-b709-ad20b0d5984a'}})  2025-06-22 19:45:25.015208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c557b89-2e3b-5795-aff3-9e4ccad52f24'}})  2025-06-22 19:45:25.020142 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:25.020577 | orchestrator | 2025-06-22 19:45:25.021535 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:45:25.022342 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.126) 0:00:26.898 *********** 2025-06-22 19:45:25.143908 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:45:25.144965 | orchestrator | 2025-06-22 19:45:25.146162 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:45:25.147204 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.129) 0:00:27.028 *********** 2025-06-22 19:45:25.274650 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:45:25.275927 
| orchestrator | 2025-06-22 19:45:25.277019 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:45:25.278473 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.131) 0:00:27.159 *********** 2025-06-22 19:45:25.394284 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:25.395109 | orchestrator | 2025-06-22 19:45:25.395930 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:45:25.397190 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.119) 0:00:27.279 *********** 2025-06-22 19:45:25.644902 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:25.646491 | orchestrator | 2025-06-22 19:45:25.646520 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:45:25.647095 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.248) 0:00:27.528 *********** 2025-06-22 19:45:25.767190 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:25.767759 | orchestrator | 2025-06-22 19:45:25.768145 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:45:25.769078 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.122) 0:00:27.650 *********** 2025-06-22 19:45:25.895567 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:45:25.897148 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:45:25.897766 | orchestrator |  "sdb": { 2025-06-22 19:45:25.899376 | orchestrator |  "osd_lvm_uuid": "b7d3102c-a914-5a7b-b709-ad20b0d5984a" 2025-06-22 19:45:25.899701 | orchestrator |  }, 2025-06-22 19:45:25.900336 | orchestrator |  "sdc": { 2025-06-22 19:45:25.900900 | orchestrator |  "osd_lvm_uuid": "0c557b89-2e3b-5795-aff3-9e4ccad52f24" 2025-06-22 19:45:25.902573 | orchestrator |  } 2025-06-22 19:45:25.903876 | orchestrator |  } 2025-06-22 19:45:25.904964 | orchestrator | } 2025-06-22 19:45:25.905899 | orchestrator | 2025-06-22 19:45:25.906577 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:45:25.906890 | orchestrator | Sunday 22 June 2025 19:45:25 +0000 (0:00:00.128) 0:00:27.779 *********** 2025-06-22 19:45:26.021737 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:26.023105 | orchestrator | 2025-06-22 19:45:26.024053 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:45:26.024718 | orchestrator | Sunday 22 June 2025 19:45:26 +0000 (0:00:00.126) 0:00:27.906 *********** 2025-06-22 19:45:26.138238 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:26.138319 | orchestrator | 2025-06-22 19:45:26.138416 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:45:26.138450 | orchestrator | Sunday 22 June 2025 19:45:26 +0000 (0:00:00.117) 0:00:28.023 *********** 2025-06-22 19:45:26.268303 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:26.268884 | orchestrator | 2025-06-22 19:45:26.269315 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:45:26.269579 | orchestrator | Sunday 22 June 2025 19:45:26 +0000 (0:00:00.126) 0:00:28.150 *********** 2025-06-22 19:45:26.479211 | orchestrator | changed: [testbed-node-4] => { 2025-06-22 19:45:26.479308 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:45:26.479323 | orchestrator |  "ceph_osd_devices": { 2025-06-22 
19:45:26.479334 | orchestrator |  "sdb": { 2025-06-22 19:45:26.479346 | orchestrator |  "osd_lvm_uuid": "b7d3102c-a914-5a7b-b709-ad20b0d5984a" 2025-06-22 19:45:26.479358 | orchestrator |  }, 2025-06-22 19:45:26.479734 | orchestrator |  "sdc": { 2025-06-22 19:45:26.479757 | orchestrator |  "osd_lvm_uuid": "0c557b89-2e3b-5795-aff3-9e4ccad52f24" 2025-06-22 19:45:26.481029 | orchestrator |  } 2025-06-22 19:45:26.482276 | orchestrator |  }, 2025-06-22 19:45:26.482590 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:45:26.483832 | orchestrator |  { 2025-06-22 19:45:26.484713 | orchestrator |  "data": "osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a", 2025-06-22 19:45:26.485408 | orchestrator |  "data_vg": "ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a" 2025-06-22 19:45:26.485926 | orchestrator |  }, 2025-06-22 19:45:26.487520 | orchestrator |  { 2025-06-22 19:45:26.488036 | orchestrator |  "data": "osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24", 2025-06-22 19:45:26.488911 | orchestrator |  "data_vg": "ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24" 2025-06-22 19:45:26.489380 | orchestrator |  } 2025-06-22 19:45:26.490339 | orchestrator |  ] 2025-06-22 19:45:26.490606 | orchestrator |  } 2025-06-22 19:45:26.491555 | orchestrator | } 2025-06-22 19:45:26.492014 | orchestrator | 2025-06-22 19:45:26.492917 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:45:26.493218 | orchestrator | Sunday 22 June 2025 19:45:26 +0000 (0:00:00.205) 0:00:28.355 *********** 2025-06-22 19:45:27.437517 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:45:27.438735 | orchestrator | 2025-06-22 19:45:27.440205 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:45:27.441006 | orchestrator | 2025-06-22 19:45:27.442415 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:45:27.443595 | orchestrator | Sunday 22 June 2025 19:45:27 +0000 (0:00:00.965) 0:00:29.321 *********** 2025-06-22 19:45:27.840752 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:45:27.841772 | orchestrator | 2025-06-22 19:45:27.842999 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:45:27.843880 | orchestrator | Sunday 22 June 2025 19:45:27 +0000 (0:00:00.402) 0:00:29.724 *********** 2025-06-22 19:45:28.359259 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:45:28.361069 | orchestrator | 2025-06-22 19:45:28.364609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:28.364644 | orchestrator | Sunday 22 June 2025 19:45:28 +0000 (0:00:00.519) 0:00:30.244 *********** 2025-06-22 19:45:28.700707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:45:28.702462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:45:28.705618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:45:28.705642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:45:28.707580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:45:28.709998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-22 19:45:28.711779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:45:28.712529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:45:28.713587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:45:28.714457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:45:28.715097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:45:28.717173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:45:28.718005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:45:28.718734 | orchestrator | 2025-06-22 19:45:28.719634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:28.719919 | orchestrator | Sunday 22 June 2025 19:45:28 +0000 (0:00:00.341) 0:00:30.585 *********** 2025-06-22 19:45:28.889208 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:28.890385 | orchestrator | 2025-06-22 19:45:28.891465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:28.892603 | orchestrator | Sunday 22 June 2025 19:45:28 +0000 (0:00:00.186) 0:00:30.771 *********** 2025-06-22 19:45:29.073010 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.073092 | orchestrator | 2025-06-22 19:45:29.074275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:29.074794 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.183) 0:00:30.955 *********** 2025-06-22 19:45:29.259037 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.260617 | orchestrator | 2025-06-22 19:45:29.261987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:29.262667 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.187) 0:00:31.142 *********** 2025-06-22 19:45:29.439022 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.440908 | orchestrator | 2025-06-22 19:45:29.441330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:29.443175 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.180) 0:00:31.323 *********** 2025-06-22 19:45:29.620280 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.623121 | orchestrator | 2025-06-22 19:45:29.625091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:29.626759 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.180) 0:00:31.503 *********** 2025-06-22 19:45:29.802070 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.804394 | orchestrator | 2025-06-22 19:45:29.805499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:29.806433 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.182) 0:00:31.685 *********** 2025-06-22 19:45:29.991217 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:29.992918 | orchestrator | 2025-06-22 19:45:29.994286 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-22 19:45:29.995606 | orchestrator | Sunday 22 June 2025 19:45:29 +0000 (0:00:00.188) 0:00:31.874 *********** 2025-06-22 19:45:30.175497 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:30.179579 | orchestrator | 2025-06-22 19:45:30.179621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:30.179635 | orchestrator | Sunday 22 June 2025 19:45:30 +0000 (0:00:00.185) 0:00:32.059 *********** 2025-06-22 19:45:30.759662 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157) 2025-06-22 19:45:30.759804 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157) 2025-06-22 19:45:30.761106 | orchestrator | 2025-06-22 19:45:30.761963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:30.762835 | orchestrator | Sunday 22 June 2025 19:45:30 +0000 (0:00:00.579) 0:00:32.639 *********** 2025-06-22 19:45:31.461702 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b) 2025-06-22 19:45:31.464459 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b) 2025-06-22 19:45:31.464993 | orchestrator | 2025-06-22 19:45:31.466216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:31.467259 | orchestrator | Sunday 22 June 2025 19:45:31 +0000 (0:00:00.705) 0:00:33.345 *********** 2025-06-22 19:45:31.870519 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238) 2025-06-22 19:45:31.875668 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238) 2025-06-22 19:45:31.876289 | orchestrator | 2025-06-22 19:45:31.878611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:31.882483 | orchestrator | Sunday 22 June 2025 19:45:31 +0000 (0:00:00.407) 0:00:33.752 *********** 2025-06-22 19:45:32.283703 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6) 2025-06-22 19:45:32.284169 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6) 2025-06-22 19:45:32.288860 | orchestrator | 2025-06-22 19:45:32.289549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:45:32.290147 | orchestrator | Sunday 22 June 2025 19:45:32 +0000 (0:00:00.409) 0:00:34.161 *********** 2025-06-22 19:45:32.620477 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:45:32.621286 | orchestrator | 2025-06-22 19:45:32.622287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:32.623001 | orchestrator | Sunday 22 June 2025 19:45:32 +0000 (0:00:00.341) 0:00:34.502 *********** 2025-06-22 19:45:33.016577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:45:33.017236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:45:33.019487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:45:33.024075 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:45:33.025106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:45:33.026486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:45:33.027654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:45:33.028693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:45:33.029627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:45:33.030569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:45:33.031533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:45:33.032368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:45:33.034332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:45:33.034507 | orchestrator | 2025-06-22 19:45:33.035231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:33.035910 | orchestrator | Sunday 22 June 2025 19:45:33 +0000 (0:00:00.397) 0:00:34.900 *********** 2025-06-22 19:45:33.246231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:33.247128 | orchestrator | 2025-06-22 19:45:33.247687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:33.248546 | orchestrator | Sunday 22 June 2025 19:45:33 +0000 (0:00:00.224) 0:00:35.125 *********** 2025-06-22 19:45:33.461943 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:33.464052 | orchestrator | 2025-06-22 19:45:33.464928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:33.465590 | orchestrator | Sunday 22 June 2025 19:45:33 +0000 (0:00:00.219) 0:00:35.344 *********** 2025-06-22 19:45:33.665654 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:33.667722 | orchestrator | 2025-06-22 19:45:33.669699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:33.670555 | orchestrator | Sunday 22 June 2025 19:45:33 +0000 (0:00:00.204) 0:00:35.549 *********** 2025-06-22 19:45:33.852729 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:33.854944 | orchestrator | 2025-06-22 19:45:33.857688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:33.858776 | orchestrator | Sunday 22 June 2025 19:45:33 +0000 (0:00:00.187) 0:00:35.737 *********** 2025-06-22 19:45:34.036520 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:34.039948 | orchestrator | 2025-06-22 19:45:34.040956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:34.042108 | orchestrator | Sunday 22 June 2025 19:45:34 +0000 (0:00:00.181) 0:00:35.919 *********** 2025-06-22 19:45:34.552963 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:34.557708 | orchestrator | 2025-06-22 19:45:34.558283 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-22 19:45:34.559324 | orchestrator | Sunday 22 June 2025 19:45:34 +0000 (0:00:00.517) 0:00:36.437 *********** 2025-06-22 19:45:34.743329 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:34.747898 | orchestrator | 2025-06-22 19:45:34.748165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:34.749553 | orchestrator | Sunday 22 June 2025 19:45:34 +0000 (0:00:00.189) 0:00:36.627 *********** 2025-06-22 19:45:34.931133 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:34.932762 | orchestrator | 2025-06-22 19:45:34.934640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:34.938272 | orchestrator | Sunday 22 June 2025 19:45:34 +0000 (0:00:00.186) 0:00:36.814 *********** 2025-06-22 19:45:35.555439 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:45:35.557335 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:45:35.560613 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:45:35.561185 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:45:35.561934 | orchestrator | 2025-06-22 19:45:35.564659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:35.564955 | orchestrator | Sunday 22 June 2025 19:45:35 +0000 (0:00:00.626) 0:00:37.440 *********** 2025-06-22 19:45:35.734013 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:35.735690 | orchestrator | 2025-06-22 19:45:35.736333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:35.737173 | orchestrator | Sunday 22 June 2025 19:45:35 +0000 (0:00:00.178) 0:00:37.618 *********** 2025-06-22 19:45:35.922386 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:35.923031 | orchestrator | 2025-06-22 19:45:35.923062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:35.923078 | orchestrator | Sunday 22 June 2025 19:45:35 +0000 (0:00:00.189) 0:00:37.808 *********** 2025-06-22 19:45:36.102069 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:36.102754 | orchestrator | 2025-06-22 19:45:36.103301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:45:36.103977 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.179) 0:00:37.987 *********** 2025-06-22 19:45:36.274994 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:36.275072 | orchestrator | 2025-06-22 19:45:36.275245 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:45:36.278427 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.169) 0:00:38.156 *********** 2025-06-22 19:45:36.428255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:45:36.428538 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:45:36.429130 | orchestrator | 2025-06-22 19:45:36.430493 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:45:36.431751 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.154) 0:00:38.311 *********** 2025-06-22 19:45:36.557102 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:36.558742 | orchestrator | 2025-06-22 19:45:36.560227 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-22 19:45:36.561519 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.127) 0:00:38.439 *********** 2025-06-22 19:45:36.664126 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:36.664215 | orchestrator | 2025-06-22 19:45:36.665014 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:45:36.665859 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.107) 0:00:38.546 *********** 2025-06-22 19:45:36.786842 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:36.788342 | orchestrator | 2025-06-22 19:45:36.789902 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:45:36.790381 | orchestrator | Sunday 22 June 2025 19:45:36 +0000 (0:00:00.125) 0:00:38.671 *********** 2025-06-22 19:45:37.084625 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:45:37.087763 | orchestrator | 2025-06-22 19:45:37.087927 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:45:37.088522 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.296) 0:00:38.967 *********** 2025-06-22 19:45:37.245773 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '26b627d5-c9a2-5c9e-a2df-a450422a30c2'}}) 2025-06-22 19:45:37.253971 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f64325fb-298e-5c24-b96e-fd5d866c56eb'}}) 2025-06-22 19:45:37.254990 | orchestrator | 2025-06-22 19:45:37.255643 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:45:37.256274 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.160) 0:00:39.128 *********** 2025-06-22 19:45:37.401077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '26b627d5-c9a2-5c9e-a2df-a450422a30c2'}})  2025-06-22 19:45:37.406370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f64325fb-298e-5c24-b96e-fd5d866c56eb'}})  2025-06-22 19:45:37.406406 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:37.406420 | orchestrator | 2025-06-22 19:45:37.406432 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:45:37.406445 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.157) 0:00:39.285 *********** 2025-06-22 19:45:37.548120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '26b627d5-c9a2-5c9e-a2df-a450422a30c2'}})  2025-06-22 19:45:37.548961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f64325fb-298e-5c24-b96e-fd5d866c56eb'}})  2025-06-22 19:45:37.549875 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:37.550453 | orchestrator | 2025-06-22 19:45:37.550887 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:45:37.551202 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.148) 0:00:39.434 *********** 2025-06-22 19:45:37.688514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '26b627d5-c9a2-5c9e-a2df-a450422a30c2'}})  2025-06-22 19:45:37.692720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f64325fb-298e-5c24-b96e-fd5d866c56eb'}})  2025-06-22 
19:45:37.692766 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:37.692779 | orchestrator | 2025-06-22 19:45:37.693184 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:45:37.694628 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.138) 0:00:39.573 *********** 2025-06-22 19:45:37.813255 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:45:37.813519 | orchestrator | 2025-06-22 19:45:37.816037 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:45:37.816549 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.126) 0:00:39.699 *********** 2025-06-22 19:45:37.928030 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:45:37.928256 | orchestrator | 2025-06-22 19:45:37.935587 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:45:37.937581 | orchestrator | Sunday 22 June 2025 19:45:37 +0000 (0:00:00.114) 0:00:39.813 *********** 2025-06-22 19:45:38.052592 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.052660 | orchestrator | 2025-06-22 19:45:38.054208 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:45:38.056445 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.121) 0:00:39.935 *********** 2025-06-22 19:45:38.179497 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.180193 | orchestrator | 2025-06-22 19:45:38.180505 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:45:38.180711 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.128) 0:00:40.064 *********** 2025-06-22 19:45:38.299683 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.300856 | orchestrator | 2025-06-22 19:45:38.304276 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:45:38.304303 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.120) 0:00:40.184 *********** 2025-06-22 19:45:38.452024 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:45:38.453457 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:45:38.453599 | orchestrator |  "sdb": { 2025-06-22 19:45:38.455486 | orchestrator |  "osd_lvm_uuid": "26b627d5-c9a2-5c9e-a2df-a450422a30c2" 2025-06-22 19:45:38.455695 | orchestrator |  }, 2025-06-22 19:45:38.455924 | orchestrator |  "sdc": { 2025-06-22 19:45:38.458315 | orchestrator |  "osd_lvm_uuid": "f64325fb-298e-5c24-b96e-fd5d866c56eb" 2025-06-22 19:45:38.458480 | orchestrator |  } 2025-06-22 19:45:38.458987 | orchestrator |  } 2025-06-22 19:45:38.459117 | orchestrator | } 2025-06-22 19:45:38.459454 | orchestrator | 2025-06-22 19:45:38.459644 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:45:38.460296 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.149) 0:00:40.334 *********** 2025-06-22 19:45:38.561033 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.561254 | orchestrator | 2025-06-22 19:45:38.562664 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:45:38.562692 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.111) 0:00:40.446 *********** 2025-06-22 19:45:38.869911 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.870135 | orchestrator | 2025-06-22 19:45:38.872001 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:45:38.872748 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.308) 0:00:40.754 *********** 2025-06-22 19:45:38.992768 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:38.992888 | orchestrator | 2025-06-22 19:45:38.993449 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:45:38.997400 | orchestrator | Sunday 22 June 2025 19:45:38 +0000 (0:00:00.122) 0:00:40.876 *********** 2025-06-22 19:45:39.205729 | orchestrator | changed: [testbed-node-5] => { 2025-06-22 19:45:39.206583 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:45:39.206636 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:45:39.206707 | orchestrator |  "sdb": { 2025-06-22 19:45:39.206723 | orchestrator |  "osd_lvm_uuid": "26b627d5-c9a2-5c9e-a2df-a450422a30c2" 2025-06-22 19:45:39.206749 | orchestrator |  }, 2025-06-22 19:45:39.206827 | orchestrator |  "sdc": { 2025-06-22 19:45:39.207600 | orchestrator |  "osd_lvm_uuid": "f64325fb-298e-5c24-b96e-fd5d866c56eb" 2025-06-22 19:45:39.208166 | orchestrator |  } 2025-06-22 19:45:39.209023 | orchestrator |  }, 2025-06-22 19:45:39.209648 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:45:39.213321 | orchestrator |  { 2025-06-22 19:45:39.213345 | orchestrator |  "data": "osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2", 2025-06-22 19:45:39.213358 | orchestrator |  "data_vg": "ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2" 2025-06-22 19:45:39.213370 | orchestrator |  }, 2025-06-22 19:45:39.213381 | orchestrator |  { 2025-06-22 19:45:39.213392 | orchestrator |  "data": "osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb", 2025-06-22 19:45:39.214324 | orchestrator |  "data_vg": "ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb" 2025-06-22 19:45:39.214345 | orchestrator |  } 2025-06-22 19:45:39.214971 | orchestrator |  ] 2025-06-22 19:45:39.216279 | orchestrator |  } 2025-06-22 19:45:39.216919 | orchestrator | } 2025-06-22 19:45:39.217711 | orchestrator | 2025-06-22 19:45:39.218622 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:45:39.219228 | orchestrator | Sunday 22 June 2025 19:45:39 +0000 (0:00:00.211) 0:00:41.088 *********** 2025-06-22 19:45:40.182752 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:45:40.184471 | orchestrator | 2025-06-22 19:45:40.184505 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:45:40.185798 | orchestrator | 2025-06-22 19:45:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:45:40.186295 | orchestrator | 2025-06-22 19:45:40 | INFO  | Please wait and do not abort execution. 
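The configuration data printed above follows one pattern throughout this play: every entry of ceph_osd_devices carries an osd_lvm_uuid, and the matching lvm_volumes entry names the logical volume osd-block-<uuid> inside the volume group ceph-<uuid>. Below is a minimal, standalone sketch of that "Generate lvm_volumes structure (block only)" step; the playbook scaffolding, the localhost target, and the plain set_fact approach are illustrative assumptions (the real tasks live in the OSISM Ansible collection), while the UUID values are copied from the output above.

---
# Sketch only: reproduces the osd-block-<uuid>/ceph-<uuid> naming visible in
# the log; this is not the actual OSISM task file.
- name: Derive lvm_volumes from ceph_osd_devices (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    # Values taken from the "Print ceph_osd_devices" output above.
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: "26b627d5-c9a2-5c9e-a2df-a450422a30c2"
      sdc:
        osd_lvm_uuid: "f64325fb-298e-5c24-b96e-fd5d866c56eb"
  tasks:
    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        lvm_volumes: >-
          {{ lvm_volumes | default([]) +
             [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
               'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Print configuration data
      ansible.builtin.debug:
        var: lvm_volumes

Run against the two devices above, this yields exactly the lvm_volumes list shown in the "Print configuration data" output that the handler then writes to the configuration file.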
2025-06-22 19:45:40.188602 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:45:40.189877 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:45:40.190902 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:45:40.192589 | orchestrator | 2025-06-22 19:45:40.193413 | orchestrator | 2025-06-22 19:45:40.194173 | orchestrator | 2025-06-22 19:45:40.195562 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:45:40.196062 | orchestrator | Sunday 22 June 2025 19:45:40 +0000 (0:00:00.977) 0:00:42.065 *********** 2025-06-22 19:45:40.197115 | orchestrator | =============================================================================== 2025-06-22 19:45:40.197705 | orchestrator | Write configuration file ------------------------------------------------ 4.29s 2025-06-22 19:45:40.198465 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2025-06-22 19:45:40.199201 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2025-06-22 19:45:40.200196 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2025-06-22 19:45:40.200617 | orchestrator | Get initial list of available block devices ----------------------------- 1.05s 2025-06-22 19:45:40.201475 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s 2025-06-22 19:45:40.201953 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-06-22 19:45:40.202988 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s 2025-06-22 19:45:40.203547 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-06-22 19:45:40.204193 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-06-22 19:45:40.205041 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-06-22 19:45:40.205069 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s 2025-06-22 19:45:40.205506 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-06-22 19:45:40.205899 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-06-22 19:45:40.206599 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-06-22 19:45:40.206818 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s 2025-06-22 19:45:40.207448 | orchestrator | Print configuration data ------------------------------------------------ 0.62s 2025-06-22 19:45:40.207765 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.61s 2025-06-22 19:45:40.208256 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-06-22 19:45:40.208477 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-06-22 19:45:52.498098 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:45:52.498206 | orchestrator | Registering Redlock._extend_script 2025-06-22 
19:45:52.498221 | orchestrator | Registering Redlock._release_script 2025-06-22 19:45:52.557478 | orchestrator | 2025-06-22 19:45:52 | INFO  | Task 1ec367d0-d7c7-4f13-8606-600537a8f13a (sync inventory) is running in background. Output coming soon. 2025-06-22 19:46:08.947961 | orchestrator | 2025-06-22 19:45:53 | INFO  | Starting group_vars file reorganization 2025-06-22 19:46:08.948062 | orchestrator | 2025-06-22 19:45:53 | INFO  | Moved 0 file(s) to their respective directories 2025-06-22 19:46:08.948077 | orchestrator | 2025-06-22 19:45:53 | INFO  | Group_vars file reorganization completed 2025-06-22 19:46:08.948088 | orchestrator | 2025-06-22 19:45:55 | INFO  | Starting variable preparation from inventory 2025-06-22 19:46:08.948100 | orchestrator | 2025-06-22 19:45:56 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-22 19:46:08.948111 | orchestrator | 2025-06-22 19:45:56 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-22 19:46:08.948143 | orchestrator | 2025-06-22 19:45:56 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-22 19:46:08.948155 | orchestrator | 2025-06-22 19:45:56 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-22 19:46:08.948166 | orchestrator | 2025-06-22 19:45:56 | INFO  | Variable preparation completed: 2025-06-22 19:46:08.948177 | orchestrator | 2025-06-22 19:45:57 | INFO  | Starting inventory overwrite handling 2025-06-22 19:46:08.948187 | orchestrator | 2025-06-22 19:45:57 | INFO  | Handling group overwrites in 99-overwrite 2025-06-22 19:46:08.948198 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group frr:children from 60-generic 2025-06-22 19:46:08.948209 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group storage:children from 50-kolla 2025-06-22 19:46:08.948219 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-22 19:46:08.948237 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-22 19:46:08.948249 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-22 19:46:08.948259 | orchestrator | 2025-06-22 19:45:57 | INFO  | Handling group overwrites in 20-roles 2025-06-22 19:46:08.948270 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-22 19:46:08.948280 | orchestrator | 2025-06-22 19:45:57 | INFO  | Removed 6 group(s) in total 2025-06-22 19:46:08.948291 | orchestrator | 2025-06-22 19:45:57 | INFO  | Inventory overwrite handling completed 2025-06-22 19:46:08.948302 | orchestrator | 2025-06-22 19:45:58 | INFO  | Starting merge of inventory files 2025-06-22 19:46:08.948312 | orchestrator | 2025-06-22 19:45:58 | INFO  | Inventory files merged successfully 2025-06-22 19:46:08.948323 | orchestrator | 2025-06-22 19:46:01 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-22 19:46:08.948334 | orchestrator | 2025-06-22 19:46:07 | INFO  | Successfully wrote ClusterShell configuration 2025-06-22 19:46:08.948345 | orchestrator | [master ec918a0] 2025-06-22-19-46 2025-06-22 19:46:08.948356 | orchestrator | 1 file changed, 30 insertions(+), 3 deletions(-) 2025-06-22 19:46:10.834217 | orchestrator | 2025-06-22 19:46:10 | INFO  | Task 6d8f4f62-8736-4ebd-816c-8cacdbf34b00 (ceph-create-lvm-devices) was prepared for execution. 
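The ceph-create-lvm-devices task that starts below creates one volume group and one logical volume per lvm_volumes entry (see the "Create block VGs" and "Create block LVs" tasks further down, which report changed for the two OSD devices). The following sketch shows the same idea with the community.general LVM modules; the physical device path and the 100%VG sizing are assumptions chosen for illustration and are not taken from the OSISM roles, while the host name and UUID come from the log.

---
# Sketch only: creates a Ceph-style block VG/LV pair in the way the
# "Create block VGs" / "Create block LVs" tasks below report doing it.
- name: Create block VG and LV for one OSD device (sketch)
  hosts: testbed-node-3          # host name taken from the log
  become: true
  gather_facts: false
  vars:
    osd_lvm_uuid: "9f4df137-04dd-5f0e-acd7-f62ec38375b4"  # UUID from the log
    osd_device: /dev/sdb                                  # assumed device path
  tasks:
    - name: Create block VG
      community.general.lvg:
        vg: "ceph-{{ osd_lvm_uuid }}"
        pvs: "{{ osd_device }}"

    - name: Create block LV
      community.general.lvol:
        vg: "ceph-{{ osd_lvm_uuid }}"
        lv: "osd-block-{{ osd_lvm_uuid }}"
        size: 100%VG             # assumption; the real sizing logic is in the role

The resulting VG/LV pair matches the data/data_vg entries of the lvm_volumes structure generated earlier.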
2025-06-22 19:46:10.834326 | orchestrator | 2025-06-22 19:46:10 | INFO  | It takes a moment until task 6d8f4f62-8736-4ebd-816c-8cacdbf34b00 (ceph-create-lvm-devices) has been started and output is visible here. 2025-06-22 19:46:14.846122 | orchestrator | 2025-06-22 19:46:14.846966 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:46:14.847806 | orchestrator | 2025-06-22 19:46:14.849796 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:46:14.850776 | orchestrator | Sunday 22 June 2025 19:46:14 +0000 (0:00:00.230) 0:00:00.230 *********** 2025-06-22 19:46:15.055015 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:46:15.056195 | orchestrator | 2025-06-22 19:46:15.057054 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:46:15.057745 | orchestrator | Sunday 22 June 2025 19:46:15 +0000 (0:00:00.211) 0:00:00.442 *********** 2025-06-22 19:46:15.251847 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:15.252118 | orchestrator | 2025-06-22 19:46:15.252584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:15.252948 | orchestrator | Sunday 22 June 2025 19:46:15 +0000 (0:00:00.197) 0:00:00.639 *********** 2025-06-22 19:46:15.612062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:46:15.612603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:46:15.613740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:46:15.614948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:46:15.615644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:46:15.616264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:46:15.617153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:46:15.617758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:46:15.618522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:46:15.619002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:46:15.619594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:46:15.620048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:46:15.620367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:46:15.620744 | orchestrator | 2025-06-22 19:46:15.621959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:15.622649 | orchestrator | Sunday 22 June 2025 19:46:15 +0000 (0:00:00.359) 0:00:00.999 *********** 2025-06-22 19:46:15.977807 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:15.978126 | orchestrator | 2025-06-22 19:46:15.979693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-06-22 19:46:15.980504 | orchestrator | Sunday 22 June 2025 19:46:15 +0000 (0:00:00.363) 0:00:01.363 *********** 2025-06-22 19:46:16.158890 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:16.159003 | orchestrator | 2025-06-22 19:46:16.159262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:16.159916 | orchestrator | Sunday 22 June 2025 19:46:16 +0000 (0:00:00.183) 0:00:01.546 *********** 2025-06-22 19:46:16.337196 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:16.337677 | orchestrator | 2025-06-22 19:46:16.338391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:16.338978 | orchestrator | Sunday 22 June 2025 19:46:16 +0000 (0:00:00.178) 0:00:01.724 *********** 2025-06-22 19:46:16.517261 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:16.517808 | orchestrator | 2025-06-22 19:46:16.518856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:16.519309 | orchestrator | Sunday 22 June 2025 19:46:16 +0000 (0:00:00.180) 0:00:01.904 *********** 2025-06-22 19:46:16.701722 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:16.703142 | orchestrator | 2025-06-22 19:46:16.704333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:16.705296 | orchestrator | Sunday 22 June 2025 19:46:16 +0000 (0:00:00.183) 0:00:02.087 *********** 2025-06-22 19:46:16.904492 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:16.904800 | orchestrator | 2025-06-22 19:46:16.904838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:16.905354 | orchestrator | Sunday 22 June 2025 19:46:16 +0000 (0:00:00.204) 0:00:02.292 *********** 2025-06-22 19:46:17.081739 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:17.081960 | orchestrator | 2025-06-22 19:46:17.082811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:17.082923 | orchestrator | Sunday 22 June 2025 19:46:17 +0000 (0:00:00.177) 0:00:02.469 *********** 2025-06-22 19:46:17.253882 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:17.254583 | orchestrator | 2025-06-22 19:46:17.255329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:17.255983 | orchestrator | Sunday 22 June 2025 19:46:17 +0000 (0:00:00.171) 0:00:02.641 *********** 2025-06-22 19:46:17.617859 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111) 2025-06-22 19:46:17.619683 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111) 2025-06-22 19:46:17.620607 | orchestrator | 2025-06-22 19:46:17.620679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:17.621651 | orchestrator | Sunday 22 June 2025 19:46:17 +0000 (0:00:00.363) 0:00:03.005 *********** 2025-06-22 19:46:18.005020 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f) 2025-06-22 19:46:18.006705 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f) 2025-06-22 19:46:18.007057 | orchestrator | 2025-06-22 19:46:18.007739 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-22 19:46:18.008357 | orchestrator | Sunday 22 June 2025 19:46:17 +0000 (0:00:00.386) 0:00:03.391 *********** 2025-06-22 19:46:18.530340 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0) 2025-06-22 19:46:18.530973 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0) 2025-06-22 19:46:18.531661 | orchestrator | 2025-06-22 19:46:18.534365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:18.534891 | orchestrator | Sunday 22 June 2025 19:46:18 +0000 (0:00:00.526) 0:00:03.918 *********** 2025-06-22 19:46:19.080470 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d) 2025-06-22 19:46:19.080573 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d) 2025-06-22 19:46:19.081075 | orchestrator | 2025-06-22 19:46:19.082123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:19.083615 | orchestrator | Sunday 22 June 2025 19:46:19 +0000 (0:00:00.547) 0:00:04.465 *********** 2025-06-22 19:46:19.828331 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:46:19.829248 | orchestrator | 2025-06-22 19:46:19.830271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:19.830786 | orchestrator | Sunday 22 June 2025 19:46:19 +0000 (0:00:00.749) 0:00:05.214 *********** 2025-06-22 19:46:20.262464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:46:20.263614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:46:20.264839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:46:20.265805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:46:20.268353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:46:20.269326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:46:20.269789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:46:20.270828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:46:20.271314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:46:20.271922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:46:20.272420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:46:20.272867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:46:20.273359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:46:20.273826 | orchestrator | 2025-06-22 19:46:20.274403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2025-06-22 19:46:20.274819 | orchestrator | Sunday 22 June 2025 19:46:20 +0000 (0:00:00.432) 0:00:05.647 *********** 2025-06-22 19:46:20.457421 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:20.457514 | orchestrator | 2025-06-22 19:46:20.458342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:20.459013 | orchestrator | Sunday 22 June 2025 19:46:20 +0000 (0:00:00.195) 0:00:05.843 *********** 2025-06-22 19:46:20.655695 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:20.656445 | orchestrator | 2025-06-22 19:46:20.657321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:20.658126 | orchestrator | Sunday 22 June 2025 19:46:20 +0000 (0:00:00.199) 0:00:06.042 *********** 2025-06-22 19:46:20.851082 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:20.851422 | orchestrator | 2025-06-22 19:46:20.852334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:20.853211 | orchestrator | Sunday 22 June 2025 19:46:20 +0000 (0:00:00.195) 0:00:06.237 *********** 2025-06-22 19:46:21.052464 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:21.052690 | orchestrator | 2025-06-22 19:46:21.054428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:21.054453 | orchestrator | Sunday 22 June 2025 19:46:21 +0000 (0:00:00.198) 0:00:06.436 *********** 2025-06-22 19:46:21.248837 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:21.249005 | orchestrator | 2025-06-22 19:46:21.249108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:21.249884 | orchestrator | Sunday 22 June 2025 19:46:21 +0000 (0:00:00.197) 0:00:06.634 *********** 2025-06-22 19:46:21.448851 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:21.449425 | orchestrator | 2025-06-22 19:46:21.451050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:21.451740 | orchestrator | Sunday 22 June 2025 19:46:21 +0000 (0:00:00.201) 0:00:06.835 *********** 2025-06-22 19:46:21.645718 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:21.645891 | orchestrator | 2025-06-22 19:46:21.646876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:21.647869 | orchestrator | Sunday 22 June 2025 19:46:21 +0000 (0:00:00.196) 0:00:07.031 *********** 2025-06-22 19:46:21.829385 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:21.830440 | orchestrator | 2025-06-22 19:46:21.831748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:21.833072 | orchestrator | Sunday 22 June 2025 19:46:21 +0000 (0:00:00.184) 0:00:07.216 *********** 2025-06-22 19:46:22.898676 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:46:22.899598 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:46:22.900488 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:46:22.901088 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:46:22.902221 | orchestrator | 2025-06-22 19:46:22.903134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:22.903670 | orchestrator | Sunday 22 June 2025 19:46:22 +0000 
(0:00:01.067) 0:00:08.283 *********** 2025-06-22 19:46:23.091885 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:23.092113 | orchestrator | 2025-06-22 19:46:23.092719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:23.093219 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.193) 0:00:08.477 *********** 2025-06-22 19:46:23.281855 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:23.282943 | orchestrator | 2025-06-22 19:46:23.283762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:23.284489 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.191) 0:00:08.668 *********** 2025-06-22 19:46:23.471435 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:23.471613 | orchestrator | 2025-06-22 19:46:23.472832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:23.473020 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.189) 0:00:08.858 *********** 2025-06-22 19:46:23.668981 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:23.669083 | orchestrator | 2025-06-22 19:46:23.669354 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:46:23.669897 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.197) 0:00:09.055 *********** 2025-06-22 19:46:23.800489 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:23.801318 | orchestrator | 2025-06-22 19:46:23.801990 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:46:23.803198 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.131) 0:00:09.186 *********** 2025-06-22 19:46:23.984180 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f4df137-04dd-5f0e-acd7-f62ec38375b4'}}) 2025-06-22 19:46:23.985095 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5c0aa592-9340-5775-8ceb-7aef1759a79b'}}) 2025-06-22 19:46:23.985854 | orchestrator | 2025-06-22 19:46:23.986556 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:46:23.987161 | orchestrator | Sunday 22 June 2025 19:46:23 +0000 (0:00:00.184) 0:00:09.371 *********** 2025-06-22 19:46:25.852408 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'}) 2025-06-22 19:46:25.852581 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'}) 2025-06-22 19:46:25.853766 | orchestrator | 2025-06-22 19:46:25.854280 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:46:25.855546 | orchestrator | Sunday 22 June 2025 19:46:25 +0000 (0:00:01.866) 0:00:11.237 *********** 2025-06-22 19:46:26.010148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:26.011157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:26.012030 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 19:46:26.012948 | orchestrator | 2025-06-22 19:46:26.014241 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:46:26.014269 | orchestrator | Sunday 22 June 2025 19:46:26 +0000 (0:00:00.158) 0:00:11.396 *********** 2025-06-22 19:46:27.431467 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'}) 2025-06-22 19:46:27.431770 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'}) 2025-06-22 19:46:27.433354 | orchestrator | 2025-06-22 19:46:27.434614 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:46:27.436085 | orchestrator | Sunday 22 June 2025 19:46:27 +0000 (0:00:01.420) 0:00:12.816 *********** 2025-06-22 19:46:27.585448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:27.585846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:27.586509 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:27.587056 | orchestrator | 2025-06-22 19:46:27.589113 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:46:27.589877 | orchestrator | Sunday 22 June 2025 19:46:27 +0000 (0:00:00.155) 0:00:12.972 *********** 2025-06-22 19:46:27.703983 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:27.704375 | orchestrator | 2025-06-22 19:46:27.705550 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:46:27.706303 | orchestrator | Sunday 22 June 2025 19:46:27 +0000 (0:00:00.118) 0:00:13.091 *********** 2025-06-22 19:46:28.056492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:28.056854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:28.058267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.059291 | orchestrator | 2025-06-22 19:46:28.060422 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:46:28.060867 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.351) 0:00:13.442 *********** 2025-06-22 19:46:28.191774 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.191958 | orchestrator | 2025-06-22 19:46:28.192940 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:46:28.193755 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.135) 0:00:13.577 *********** 2025-06-22 19:46:28.348339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:28.348801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 
'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:28.349987 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.351129 | orchestrator | 2025-06-22 19:46:28.351836 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:46:28.353738 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.157) 0:00:13.735 *********** 2025-06-22 19:46:28.487559 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.489216 | orchestrator | 2025-06-22 19:46:28.490303 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:46:28.491213 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.138) 0:00:13.873 *********** 2025-06-22 19:46:28.644951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:28.645841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:28.646696 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.647996 | orchestrator | 2025-06-22 19:46:28.649430 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:46:28.650141 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.157) 0:00:14.031 *********** 2025-06-22 19:46:28.782540 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:28.782880 | orchestrator | 2025-06-22 19:46:28.784397 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:46:28.785752 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.137) 0:00:14.169 *********** 2025-06-22 19:46:28.939181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:28.939641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:28.940507 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.941721 | orchestrator | 2025-06-22 19:46:28.943070 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:46:28.943719 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.154) 0:00:14.324 *********** 2025-06-22 19:46:29.092794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:29.092852 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:29.093599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.094434 | orchestrator | 2025-06-22 19:46:29.094801 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:46:29.095673 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.154) 0:00:14.479 *********** 2025-06-22 19:46:29.236561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 
'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:29.238143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:29.240283 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.240979 | orchestrator | 2025-06-22 19:46:29.242291 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:46:29.243431 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.142) 0:00:14.621 *********** 2025-06-22 19:46:29.370298 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.376186 | orchestrator | 2025-06-22 19:46:29.376824 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:46:29.378142 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.135) 0:00:14.756 *********** 2025-06-22 19:46:29.501040 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.501240 | orchestrator | 2025-06-22 19:46:29.502437 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:46:29.502806 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.131) 0:00:14.887 *********** 2025-06-22 19:46:29.638409 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.639474 | orchestrator | 2025-06-22 19:46:29.639853 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:46:29.640773 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.137) 0:00:15.025 *********** 2025-06-22 19:46:29.979328 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:46:29.979451 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:46:29.981906 | orchestrator | } 2025-06-22 19:46:29.982330 | orchestrator | 2025-06-22 19:46:29.983438 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:46:29.983885 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.338) 0:00:15.364 *********** 2025-06-22 19:46:30.130261 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:46:30.130611 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:46:30.132334 | orchestrator | } 2025-06-22 19:46:30.132719 | orchestrator | 2025-06-22 19:46:30.133615 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:46:30.134370 | orchestrator | Sunday 22 June 2025 19:46:30 +0000 (0:00:00.152) 0:00:15.517 *********** 2025-06-22 19:46:30.272338 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:46:30.273504 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:46:30.274586 | orchestrator | } 2025-06-22 19:46:30.276090 | orchestrator | 2025-06-22 19:46:30.277689 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:46:30.278947 | orchestrator | Sunday 22 June 2025 19:46:30 +0000 (0:00:00.141) 0:00:15.658 *********** 2025-06-22 19:46:30.914821 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:30.915079 | orchestrator | 2025-06-22 19:46:30.916142 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:46:30.917550 | orchestrator | Sunday 22 June 2025 19:46:30 +0000 (0:00:00.641) 0:00:16.300 *********** 2025-06-22 19:46:31.385790 | orchestrator | ok: [testbed-node-3] 2025-06-22 
19:46:31.386011 | orchestrator | 2025-06-22 19:46:31.386484 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:46:31.387242 | orchestrator | Sunday 22 June 2025 19:46:31 +0000 (0:00:00.469) 0:00:16.770 *********** 2025-06-22 19:46:31.865158 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:31.865564 | orchestrator | 2025-06-22 19:46:31.866091 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:46:31.866703 | orchestrator | Sunday 22 June 2025 19:46:31 +0000 (0:00:00.481) 0:00:17.251 *********** 2025-06-22 19:46:32.012915 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:32.013537 | orchestrator | 2025-06-22 19:46:32.014058 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:46:32.014810 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.147) 0:00:17.398 *********** 2025-06-22 19:46:32.128005 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:32.128177 | orchestrator | 2025-06-22 19:46:32.128486 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:46:32.129325 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.115) 0:00:17.514 *********** 2025-06-22 19:46:32.240601 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:32.240755 | orchestrator | 2025-06-22 19:46:32.241169 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:46:32.242132 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.112) 0:00:17.627 *********** 2025-06-22 19:46:32.387780 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:46:32.388839 | orchestrator |  "vgs_report": { 2025-06-22 19:46:32.389328 | orchestrator |  "vg": [] 2025-06-22 19:46:32.391961 | orchestrator |  } 2025-06-22 19:46:32.391992 | orchestrator | } 2025-06-22 19:46:32.392825 | orchestrator | 2025-06-22 19:46:32.393700 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:46:32.394716 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.146) 0:00:17.773 *********** 2025-06-22 19:46:32.527034 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:32.527656 | orchestrator | 2025-06-22 19:46:32.527908 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:46:32.529287 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.139) 0:00:17.913 *********** 2025-06-22 19:46:32.673792 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:32.674370 | orchestrator | 2025-06-22 19:46:32.675443 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:46:32.676482 | orchestrator | Sunday 22 June 2025 19:46:32 +0000 (0:00:00.146) 0:00:18.060 *********** 2025-06-22 19:46:33.019014 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.019469 | orchestrator | 2025-06-22 19:46:33.020711 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:46:33.021710 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.344) 0:00:18.404 *********** 2025-06-22 19:46:33.154802 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.155697 | orchestrator | 2025-06-22 19:46:33.157662 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] 
*********************** 2025-06-22 19:46:33.159385 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.137) 0:00:18.541 *********** 2025-06-22 19:46:33.300009 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.301281 | orchestrator | 2025-06-22 19:46:33.302728 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:46:33.303546 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.142) 0:00:18.684 *********** 2025-06-22 19:46:33.440845 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.440999 | orchestrator | 2025-06-22 19:46:33.441460 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:46:33.441891 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.143) 0:00:18.828 *********** 2025-06-22 19:46:33.572715 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.573382 | orchestrator | 2025-06-22 19:46:33.574523 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:46:33.575126 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.131) 0:00:18.959 *********** 2025-06-22 19:46:33.712515 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.713088 | orchestrator | 2025-06-22 19:46:33.714380 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:46:33.714587 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.138) 0:00:19.098 *********** 2025-06-22 19:46:33.843022 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.844424 | orchestrator | 2025-06-22 19:46:33.845633 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:46:33.846662 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.131) 0:00:19.229 *********** 2025-06-22 19:46:33.981599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:33.982503 | orchestrator | 2025-06-22 19:46:33.983387 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:46:33.984251 | orchestrator | Sunday 22 June 2025 19:46:33 +0000 (0:00:00.139) 0:00:19.368 *********** 2025-06-22 19:46:34.116628 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:34.119190 | orchestrator | 2025-06-22 19:46:34.120118 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:46:34.120615 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.129) 0:00:19.498 *********** 2025-06-22 19:46:34.245123 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:34.246001 | orchestrator | 2025-06-22 19:46:34.247468 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:46:34.249034 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.132) 0:00:19.631 *********** 2025-06-22 19:46:34.407187 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:34.408808 | orchestrator | 2025-06-22 19:46:34.410712 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:46:34.411554 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.161) 0:00:19.792 *********** 2025-06-22 19:46:34.540011 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:34.543366 | orchestrator | 2025-06-22 19:46:34.544181 | orchestrator | TASK [Create DB LVs for ceph_db_devices] 
*************************************** 2025-06-22 19:46:34.545178 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.132) 0:00:19.924 *********** 2025-06-22 19:46:34.697125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:34.698556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:34.701315 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:34.701345 | orchestrator | 2025-06-22 19:46:34.701571 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:46:34.702098 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.157) 0:00:20.082 *********** 2025-06-22 19:46:35.076456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.076662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:35.078745 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.079728 | orchestrator | 2025-06-22 19:46:35.080801 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:46:35.081661 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.378) 0:00:20.461 *********** 2025-06-22 19:46:35.235266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.236236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:35.237050 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.238323 | orchestrator | 2025-06-22 19:46:35.239564 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:46:35.240775 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.160) 0:00:20.621 *********** 2025-06-22 19:46:35.403718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.404014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:35.405178 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.405644 | orchestrator | 2025-06-22 19:46:35.406475 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:46:35.407305 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.169) 0:00:20.790 *********** 2025-06-22 19:46:35.556386 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.557442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  
2025-06-22 19:46:35.560438 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.560632 | orchestrator | 2025-06-22 19:46:35.562358 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:46:35.562737 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.150) 0:00:20.941 *********** 2025-06-22 19:46:35.710086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.711354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:35.712351 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.713061 | orchestrator | 2025-06-22 19:46:35.714644 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:46:35.714669 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.154) 0:00:21.096 *********** 2025-06-22 19:46:35.861880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:35.862998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:35.864346 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:35.865491 | orchestrator | 2025-06-22 19:46:35.866445 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:46:35.868584 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.151) 0:00:21.247 *********** 2025-06-22 19:46:36.013446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:36.014720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:36.016037 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:36.017852 | orchestrator | 2025-06-22 19:46:36.017876 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:46:36.018281 | orchestrator | Sunday 22 June 2025 19:46:36 +0000 (0:00:00.152) 0:00:21.400 *********** 2025-06-22 19:46:36.489400 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:36.490139 | orchestrator | 2025-06-22 19:46:36.491405 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:46:36.493140 | orchestrator | Sunday 22 June 2025 19:46:36 +0000 (0:00:00.475) 0:00:21.876 *********** 2025-06-22 19:46:36.986143 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:36.986367 | orchestrator | 2025-06-22 19:46:36.986911 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:46:36.987989 | orchestrator | Sunday 22 June 2025 19:46:36 +0000 (0:00:00.496) 0:00:22.372 *********** 2025-06-22 19:46:37.137361 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:37.138271 | orchestrator | 2025-06-22 19:46:37.139019 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
2025-06-22 19:46:37.139815 | orchestrator | Sunday 22 June 2025 19:46:37 +0000 (0:00:00.150) 0:00:22.523 *********** 2025-06-22 19:46:37.295884 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'vg_name': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'}) 2025-06-22 19:46:37.296745 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'vg_name': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'}) 2025-06-22 19:46:37.298209 | orchestrator | 2025-06-22 19:46:37.298802 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:46:37.299799 | orchestrator | Sunday 22 June 2025 19:46:37 +0000 (0:00:00.159) 0:00:22.682 *********** 2025-06-22 19:46:37.471113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:37.472020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:37.472396 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:37.473218 | orchestrator | 2025-06-22 19:46:37.474075 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:46:37.475036 | orchestrator | Sunday 22 June 2025 19:46:37 +0000 (0:00:00.174) 0:00:22.857 *********** 2025-06-22 19:46:37.873172 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:37.873955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:37.874604 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:37.875530 | orchestrator | 2025-06-22 19:46:37.876536 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:46:37.876835 | orchestrator | Sunday 22 June 2025 19:46:37 +0000 (0:00:00.402) 0:00:23.259 *********** 2025-06-22 19:46:38.033355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'})  2025-06-22 19:46:38.033529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'})  2025-06-22 19:46:38.033741 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:38.034654 | orchestrator | 2025-06-22 19:46:38.034979 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:46:38.035507 | orchestrator | Sunday 22 June 2025 19:46:38 +0000 (0:00:00.160) 0:00:23.420 *********** 2025-06-22 19:46:38.313564 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:46:38.314084 | orchestrator |  "lvm_report": { 2025-06-22 19:46:38.316325 | orchestrator |  "lv": [ 2025-06-22 19:46:38.317710 | orchestrator |  { 2025-06-22 19:46:38.317887 | orchestrator |  "lv_name": "osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b", 2025-06-22 19:46:38.318534 | orchestrator |  "vg_name": "ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b" 2025-06-22 19:46:38.319444 | orchestrator |  }, 2025-06-22 19:46:38.319911 
| orchestrator |  { 2025-06-22 19:46:38.320483 | orchestrator |  "lv_name": "osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4", 2025-06-22 19:46:38.321170 | orchestrator |  "vg_name": "ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4" 2025-06-22 19:46:38.322610 | orchestrator |  } 2025-06-22 19:46:38.323559 | orchestrator |  ], 2025-06-22 19:46:38.324519 | orchestrator |  "pv": [ 2025-06-22 19:46:38.325247 | orchestrator |  { 2025-06-22 19:46:38.326087 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:46:38.326391 | orchestrator |  "vg_name": "ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4" 2025-06-22 19:46:38.327104 | orchestrator |  }, 2025-06-22 19:46:38.327635 | orchestrator |  { 2025-06-22 19:46:38.328155 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:46:38.328843 | orchestrator |  "vg_name": "ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b" 2025-06-22 19:46:38.329770 | orchestrator |  } 2025-06-22 19:46:38.330645 | orchestrator |  ] 2025-06-22 19:46:38.332407 | orchestrator |  } 2025-06-22 19:46:38.334639 | orchestrator | } 2025-06-22 19:46:38.334663 | orchestrator | 2025-06-22 19:46:38.334674 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:46:38.334684 | orchestrator | 2025-06-22 19:46:38.334693 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:46:38.334760 | orchestrator | Sunday 22 June 2025 19:46:38 +0000 (0:00:00.279) 0:00:23.699 *********** 2025-06-22 19:46:38.570817 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:46:38.574911 | orchestrator | 2025-06-22 19:46:38.574988 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:46:38.575012 | orchestrator | Sunday 22 June 2025 19:46:38 +0000 (0:00:00.257) 0:00:23.957 *********** 2025-06-22 19:46:38.807892 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:38.808193 | orchestrator | 2025-06-22 19:46:38.811470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:38.814796 | orchestrator | Sunday 22 June 2025 19:46:38 +0000 (0:00:00.237) 0:00:24.194 *********** 2025-06-22 19:46:39.194396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:46:39.195452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:46:39.196377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:46:39.197792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:46:39.198163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:46:39.199210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:46:39.199966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:46:39.200664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:46:39.201312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:46:39.202084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:46:39.202862 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:46:39.203419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:46:39.203771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:46:39.204321 | orchestrator | 2025-06-22 19:46:39.204806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:39.205349 | orchestrator | Sunday 22 June 2025 19:46:39 +0000 (0:00:00.385) 0:00:24.580 *********** 2025-06-22 19:46:39.384153 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:39.384541 | orchestrator | 2025-06-22 19:46:39.385105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:39.386285 | orchestrator | Sunday 22 June 2025 19:46:39 +0000 (0:00:00.190) 0:00:24.771 *********** 2025-06-22 19:46:39.591188 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:39.592194 | orchestrator | 2025-06-22 19:46:39.593002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:39.595049 | orchestrator | Sunday 22 June 2025 19:46:39 +0000 (0:00:00.205) 0:00:24.976 *********** 2025-06-22 19:46:39.792242 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:39.793529 | orchestrator | 2025-06-22 19:46:39.794439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:39.795486 | orchestrator | Sunday 22 June 2025 19:46:39 +0000 (0:00:00.203) 0:00:25.179 *********** 2025-06-22 19:46:40.445767 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:40.446915 | orchestrator | 2025-06-22 19:46:40.447534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:40.448195 | orchestrator | Sunday 22 June 2025 19:46:40 +0000 (0:00:00.649) 0:00:25.828 *********** 2025-06-22 19:46:40.648345 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:40.648848 | orchestrator | 2025-06-22 19:46:40.649621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:40.650317 | orchestrator | Sunday 22 June 2025 19:46:40 +0000 (0:00:00.205) 0:00:26.034 *********** 2025-06-22 19:46:40.852696 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:40.853663 | orchestrator | 2025-06-22 19:46:40.854147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:40.854831 | orchestrator | Sunday 22 June 2025 19:46:40 +0000 (0:00:00.204) 0:00:26.239 *********** 2025-06-22 19:46:41.058306 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:41.058754 | orchestrator | 2025-06-22 19:46:41.059896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:41.060595 | orchestrator | Sunday 22 June 2025 19:46:41 +0000 (0:00:00.205) 0:00:26.445 *********** 2025-06-22 19:46:41.243145 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:41.244174 | orchestrator | 2025-06-22 19:46:41.244931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:41.245849 | orchestrator | Sunday 22 June 2025 19:46:41 +0000 (0:00:00.184) 0:00:26.629 *********** 2025-06-22 19:46:41.658161 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354) 2025-06-22 19:46:41.659497 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354) 2025-06-22 19:46:41.660372 | orchestrator | 2025-06-22 19:46:41.661563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:41.662087 | orchestrator | Sunday 22 June 2025 19:46:41 +0000 (0:00:00.414) 0:00:27.044 *********** 2025-06-22 19:46:42.068825 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e) 2025-06-22 19:46:42.070876 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e) 2025-06-22 19:46:42.072888 | orchestrator | 2025-06-22 19:46:42.073554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:42.074631 | orchestrator | Sunday 22 June 2025 19:46:42 +0000 (0:00:00.409) 0:00:27.453 *********** 2025-06-22 19:46:42.489076 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985) 2025-06-22 19:46:42.489539 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985) 2025-06-22 19:46:42.490524 | orchestrator | 2025-06-22 19:46:42.491020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:42.492839 | orchestrator | Sunday 22 June 2025 19:46:42 +0000 (0:00:00.420) 0:00:27.873 *********** 2025-06-22 19:46:42.918907 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c) 2025-06-22 19:46:42.919797 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c) 2025-06-22 19:46:42.921011 | orchestrator | 2025-06-22 19:46:42.922176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:42.923582 | orchestrator | Sunday 22 June 2025 19:46:42 +0000 (0:00:00.432) 0:00:28.305 *********** 2025-06-22 19:46:43.247803 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:46:43.247910 | orchestrator | 2025-06-22 19:46:43.248251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:43.248741 | orchestrator | Sunday 22 June 2025 19:46:43 +0000 (0:00:00.327) 0:00:28.633 *********** 2025-06-22 19:46:43.906448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:46:43.906736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:46:43.910167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:46:43.911016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:46:43.911493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:46:43.911922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:46:43.912438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:46:43.912850 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:46:43.914607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:46:43.916321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:46:43.916346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:46:43.916358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:46:43.916370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:46:43.916706 | orchestrator | 2025-06-22 19:46:43.917260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:43.917505 | orchestrator | Sunday 22 June 2025 19:46:43 +0000 (0:00:00.659) 0:00:29.293 *********** 2025-06-22 19:46:44.116462 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.116849 | orchestrator | 2025-06-22 19:46:44.117776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:44.118730 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:00.209) 0:00:29.502 *********** 2025-06-22 19:46:44.324018 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.325771 | orchestrator | 2025-06-22 19:46:44.327619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:44.327678 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:00.206) 0:00:29.709 *********** 2025-06-22 19:46:44.529851 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.530966 | orchestrator | 2025-06-22 19:46:44.532389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:44.533244 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:00.206) 0:00:29.915 *********** 2025-06-22 19:46:44.747431 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.748536 | orchestrator | 2025-06-22 19:46:44.748724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:44.749826 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:00.214) 0:00:30.130 *********** 2025-06-22 19:46:44.943550 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.943754 | orchestrator | 2025-06-22 19:46:44.944744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:44.945493 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:00.199) 0:00:30.329 *********** 2025-06-22 19:46:45.157910 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:45.158687 | orchestrator | 2025-06-22 19:46:45.159249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:45.160089 | orchestrator | Sunday 22 June 2025 19:46:45 +0000 (0:00:00.215) 0:00:30.544 *********** 2025-06-22 19:46:45.363305 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:45.364108 | orchestrator | 2025-06-22 19:46:45.365398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:45.366420 | orchestrator | Sunday 22 June 2025 19:46:45 +0000 (0:00:00.203) 0:00:30.748 *********** 2025-06-22 19:46:45.567312 | orchestrator | 
skipping: [testbed-node-4] 2025-06-22 19:46:45.567523 | orchestrator | 2025-06-22 19:46:45.568266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:45.569119 | orchestrator | Sunday 22 June 2025 19:46:45 +0000 (0:00:00.205) 0:00:30.953 *********** 2025-06-22 19:46:46.424072 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:46:46.424372 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:46:46.425147 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:46:46.425855 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:46:46.426797 | orchestrator | 2025-06-22 19:46:46.428303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:46.428329 | orchestrator | Sunday 22 June 2025 19:46:46 +0000 (0:00:00.856) 0:00:31.809 *********** 2025-06-22 19:46:46.641684 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:46.642572 | orchestrator | 2025-06-22 19:46:46.642907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:46.643322 | orchestrator | Sunday 22 June 2025 19:46:46 +0000 (0:00:00.218) 0:00:32.028 *********** 2025-06-22 19:46:46.852800 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:46.854406 | orchestrator | 2025-06-22 19:46:46.855642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:46.856733 | orchestrator | Sunday 22 June 2025 19:46:46 +0000 (0:00:00.210) 0:00:32.239 *********** 2025-06-22 19:46:47.601196 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:47.601306 | orchestrator | 2025-06-22 19:46:47.601604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:46:47.602410 | orchestrator | Sunday 22 June 2025 19:46:47 +0000 (0:00:00.742) 0:00:32.982 *********** 2025-06-22 19:46:47.808438 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:47.810871 | orchestrator | 2025-06-22 19:46:47.810933 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:46:47.811024 | orchestrator | Sunday 22 June 2025 19:46:47 +0000 (0:00:00.207) 0:00:33.189 *********** 2025-06-22 19:46:47.948127 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:47.948277 | orchestrator | 2025-06-22 19:46:47.949476 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:46:47.950339 | orchestrator | Sunday 22 June 2025 19:46:47 +0000 (0:00:00.144) 0:00:33.333 *********** 2025-06-22 19:46:48.147559 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7d3102c-a914-5a7b-b709-ad20b0d5984a'}}) 2025-06-22 19:46:48.148240 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0c557b89-2e3b-5795-aff3-9e4ccad52f24'}}) 2025-06-22 19:46:48.149307 | orchestrator | 2025-06-22 19:46:48.150254 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:46:48.151170 | orchestrator | Sunday 22 June 2025 19:46:48 +0000 (0:00:00.200) 0:00:33.534 *********** 2025-06-22 19:46:49.945151 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'}) 2025-06-22 19:46:49.947438 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'}) 2025-06-22 19:46:49.948731 | orchestrator | 2025-06-22 19:46:49.949474 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:46:49.950199 | orchestrator | Sunday 22 June 2025 19:46:49 +0000 (0:00:01.795) 0:00:35.329 *********** 2025-06-22 19:46:50.103084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:50.103455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:50.104827 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:50.105412 | orchestrator | 2025-06-22 19:46:50.106467 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:46:50.108407 | orchestrator | Sunday 22 June 2025 19:46:50 +0000 (0:00:00.160) 0:00:35.490 *********** 2025-06-22 19:46:51.355892 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'}) 2025-06-22 19:46:51.356385 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'}) 2025-06-22 19:46:51.357340 | orchestrator | 2025-06-22 19:46:51.358383 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:46:51.359102 | orchestrator | Sunday 22 June 2025 19:46:51 +0000 (0:00:01.249) 0:00:36.739 *********** 2025-06-22 19:46:51.512508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:51.512610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:51.512626 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:51.512639 | orchestrator | 2025-06-22 19:46:51.512651 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:46:51.512664 | orchestrator | Sunday 22 June 2025 19:46:51 +0000 (0:00:00.157) 0:00:36.897 *********** 2025-06-22 19:46:51.658296 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:51.658376 | orchestrator | 2025-06-22 19:46:51.658695 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:46:51.659650 | orchestrator | Sunday 22 June 2025 19:46:51 +0000 (0:00:00.146) 0:00:37.043 *********** 2025-06-22 19:46:51.817399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:51.817580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:51.818205 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:51.818831 | orchestrator | 2025-06-22 19:46:51.819366 | orchestrator | TASK [Create WAL VGs] 
********************************************************** 2025-06-22 19:46:51.820235 | orchestrator | Sunday 22 June 2025 19:46:51 +0000 (0:00:00.157) 0:00:37.201 *********** 2025-06-22 19:46:51.960671 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:51.961073 | orchestrator | 2025-06-22 19:46:51.962515 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:46:51.963194 | orchestrator | Sunday 22 June 2025 19:46:51 +0000 (0:00:00.145) 0:00:37.346 *********** 2025-06-22 19:46:52.116031 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:52.117006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:52.118838 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:52.119093 | orchestrator | 2025-06-22 19:46:52.120336 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:46:52.120779 | orchestrator | Sunday 22 June 2025 19:46:52 +0000 (0:00:00.153) 0:00:37.500 *********** 2025-06-22 19:46:52.507632 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:52.507831 | orchestrator | 2025-06-22 19:46:52.509032 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:46:52.510469 | orchestrator | Sunday 22 June 2025 19:46:52 +0000 (0:00:00.393) 0:00:37.893 *********** 2025-06-22 19:46:52.663214 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:52.663371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:52.664016 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:52.665465 | orchestrator | 2025-06-22 19:46:52.665718 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:46:52.666864 | orchestrator | Sunday 22 June 2025 19:46:52 +0000 (0:00:00.155) 0:00:38.049 *********** 2025-06-22 19:46:52.795497 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:52.795603 | orchestrator | 2025-06-22 19:46:52.796399 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:46:52.797173 | orchestrator | Sunday 22 June 2025 19:46:52 +0000 (0:00:00.133) 0:00:38.182 *********** 2025-06-22 19:46:52.942578 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:52.943112 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:52.944407 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:52.946098 | orchestrator | 2025-06-22 19:46:52.947342 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:46:52.948469 | orchestrator | Sunday 22 June 2025 19:46:52 +0000 (0:00:00.146) 0:00:38.328 *********** 2025-06-22 19:46:53.090279 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:53.090498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:53.091733 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:53.094065 | orchestrator | 2025-06-22 19:46:53.094814 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:46:53.095168 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.146) 0:00:38.475 *********** 2025-06-22 19:46:53.238814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:53.238926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:53.239504 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:53.240646 | orchestrator | 2025-06-22 19:46:53.241048 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:46:53.242089 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.149) 0:00:38.624 *********** 2025-06-22 19:46:53.375316 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:53.375927 | orchestrator | 2025-06-22 19:46:53.376519 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:46:53.377661 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.137) 0:00:38.762 *********** 2025-06-22 19:46:53.503268 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:53.503371 | orchestrator | 2025-06-22 19:46:53.504775 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:46:53.504910 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.128) 0:00:38.891 *********** 2025-06-22 19:46:53.646265 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:53.647413 | orchestrator | 2025-06-22 19:46:53.648565 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:46:53.649363 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.142) 0:00:39.033 *********** 2025-06-22 19:46:53.786005 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:46:53.787028 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:46:53.787846 | orchestrator | } 2025-06-22 19:46:53.788517 | orchestrator | 2025-06-22 19:46:53.789498 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:46:53.789871 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.139) 0:00:39.173 *********** 2025-06-22 19:46:53.922204 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:46:53.924062 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:46:53.924679 | orchestrator | } 2025-06-22 19:46:53.925645 | orchestrator | 2025-06-22 19:46:53.926283 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:46:53.926938 | orchestrator | Sunday 22 June 2025 19:46:53 +0000 (0:00:00.135) 0:00:39.308 *********** 2025-06-22 19:46:54.056526 | orchestrator | ok: [testbed-node-4] => { 
2025-06-22 19:46:54.056720 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:46:54.057793 | orchestrator | } 2025-06-22 19:46:54.058313 | orchestrator | 2025-06-22 19:46:54.059295 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:46:54.059679 | orchestrator | Sunday 22 June 2025 19:46:54 +0000 (0:00:00.134) 0:00:39.442 *********** 2025-06-22 19:46:54.683227 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:54.683759 | orchestrator | 2025-06-22 19:46:54.685203 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:46:54.686404 | orchestrator | Sunday 22 June 2025 19:46:54 +0000 (0:00:00.626) 0:00:40.069 *********** 2025-06-22 19:46:55.150930 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:55.152157 | orchestrator | 2025-06-22 19:46:55.153511 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:46:55.154195 | orchestrator | Sunday 22 June 2025 19:46:55 +0000 (0:00:00.468) 0:00:40.537 *********** 2025-06-22 19:46:55.618716 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:55.619157 | orchestrator | 2025-06-22 19:46:55.620252 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:46:55.620384 | orchestrator | Sunday 22 June 2025 19:46:55 +0000 (0:00:00.467) 0:00:41.004 *********** 2025-06-22 19:46:55.738654 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:55.738735 | orchestrator | 2025-06-22 19:46:55.738781 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:46:55.739774 | orchestrator | Sunday 22 June 2025 19:46:55 +0000 (0:00:00.118) 0:00:41.123 *********** 2025-06-22 19:46:55.841035 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:55.841561 | orchestrator | 2025-06-22 19:46:55.842412 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:46:55.843328 | orchestrator | Sunday 22 June 2025 19:46:55 +0000 (0:00:00.105) 0:00:41.229 *********** 2025-06-22 19:46:55.936720 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:55.938121 | orchestrator | 2025-06-22 19:46:55.940057 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:46:55.941070 | orchestrator | Sunday 22 June 2025 19:46:55 +0000 (0:00:00.095) 0:00:41.324 *********** 2025-06-22 19:46:56.057017 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:46:56.057915 | orchestrator |  "vgs_report": { 2025-06-22 19:46:56.060821 | orchestrator |  "vg": [] 2025-06-22 19:46:56.060886 | orchestrator |  } 2025-06-22 19:46:56.061160 | orchestrator | } 2025-06-22 19:46:56.061731 | orchestrator | 2025-06-22 19:46:56.062332 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:46:56.064119 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.119) 0:00:41.444 *********** 2025-06-22 19:46:56.179099 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.179273 | orchestrator | 2025-06-22 19:46:56.180261 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:46:56.181735 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.122) 0:00:41.566 *********** 2025-06-22 19:46:56.299413 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.300006 | 
orchestrator | 2025-06-22 19:46:56.300897 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:46:56.302127 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.120) 0:00:41.687 *********** 2025-06-22 19:46:56.430206 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.430852 | orchestrator | 2025-06-22 19:46:56.432608 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:46:56.432654 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.130) 0:00:41.817 *********** 2025-06-22 19:46:56.548823 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.550450 | orchestrator | 2025-06-22 19:46:56.550484 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:46:56.550838 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.117) 0:00:41.935 *********** 2025-06-22 19:46:56.677310 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.678095 | orchestrator | 2025-06-22 19:46:56.678839 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:46:56.679719 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.128) 0:00:42.064 *********** 2025-06-22 19:46:56.956516 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:56.956730 | orchestrator | 2025-06-22 19:46:56.957614 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:46:56.957999 | orchestrator | Sunday 22 June 2025 19:46:56 +0000 (0:00:00.280) 0:00:42.344 *********** 2025-06-22 19:46:57.090586 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.090676 | orchestrator | 2025-06-22 19:46:57.093066 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:46:57.093573 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.132) 0:00:42.477 *********** 2025-06-22 19:46:57.212372 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.212871 | orchestrator | 2025-06-22 19:46:57.214571 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:46:57.215087 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.122) 0:00:42.599 *********** 2025-06-22 19:46:57.328279 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.328631 | orchestrator | 2025-06-22 19:46:57.329627 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:46:57.330444 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.115) 0:00:42.715 *********** 2025-06-22 19:46:57.460624 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.461621 | orchestrator | 2025-06-22 19:46:57.463454 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:46:57.463485 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.132) 0:00:42.848 *********** 2025-06-22 19:46:57.571883 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.572413 | orchestrator | 2025-06-22 19:46:57.573116 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:46:57.574689 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.111) 0:00:42.959 *********** 2025-06-22 19:46:57.701920 | orchestrator | skipping: [testbed-node-4] 2025-06-22 
19:46:57.702615 | orchestrator | 2025-06-22 19:46:57.704406 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:46:57.704441 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.129) 0:00:43.088 *********** 2025-06-22 19:46:57.831274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.832207 | orchestrator | 2025-06-22 19:46:57.833148 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:46:57.834328 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.130) 0:00:43.218 *********** 2025-06-22 19:46:57.960242 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:57.961391 | orchestrator | 2025-06-22 19:46:57.962622 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:46:57.963315 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.128) 0:00:43.347 *********** 2025-06-22 19:46:58.119476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.119741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.120728 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.121767 | orchestrator | 2025-06-22 19:46:58.122529 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:46:58.123533 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.158) 0:00:43.505 *********** 2025-06-22 19:46:58.253065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.253788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.254691 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.255599 | orchestrator | 2025-06-22 19:46:58.255947 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:46:58.256990 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.134) 0:00:43.640 *********** 2025-06-22 19:46:58.413299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.413728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.414462 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.415532 | orchestrator | 2025-06-22 19:46:58.416138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:46:58.416567 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.159) 0:00:43.800 *********** 2025-06-22 19:46:58.692262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.692326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.692791 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.693867 | orchestrator | 2025-06-22 19:46:58.694530 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:46:58.694943 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.279) 0:00:44.079 *********** 2025-06-22 19:46:58.835562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.836137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.837134 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.837661 | orchestrator | 2025-06-22 19:46:58.838273 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:46:58.838644 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.143) 0:00:44.223 *********** 2025-06-22 19:46:58.991509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:58.992016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:58.993240 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:58.993497 | orchestrator | 2025-06-22 19:46:58.994327 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:46:58.994944 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.155) 0:00:44.378 *********** 2025-06-22 19:46:59.131189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:59.131491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:59.133027 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:59.133744 | orchestrator | 2025-06-22 19:46:59.134568 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:46:59.135170 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.138) 0:00:44.517 *********** 2025-06-22 19:46:59.266808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:46:59.267198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:46:59.268017 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:59.268689 | orchestrator | 2025-06-22 19:46:59.269137 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:46:59.269726 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.136) 0:00:44.654 *********** 2025-06-22 19:46:59.768717 | orchestrator | ok: [testbed-node-4] 2025-06-22 
19:46:59.768826 | orchestrator | 2025-06-22 19:46:59.769381 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:46:59.770309 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.501) 0:00:45.156 *********** 2025-06-22 19:47:00.244142 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:00.246852 | orchestrator | 2025-06-22 19:47:00.246906 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:47:00.247945 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.474) 0:00:45.631 *********** 2025-06-22 19:47:00.371114 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:00.372518 | orchestrator | 2025-06-22 19:47:00.373843 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:47:00.374145 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.125) 0:00:45.756 *********** 2025-06-22 19:47:00.523833 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'vg_name': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'}) 2025-06-22 19:47:00.525487 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'vg_name': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'}) 2025-06-22 19:47:00.526316 | orchestrator | 2025-06-22 19:47:00.526788 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:47:00.527357 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.154) 0:00:45.911 *********** 2025-06-22 19:47:00.661770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:47:00.663210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:47:00.663449 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:00.664076 | orchestrator | 2025-06-22 19:47:00.664687 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:47:00.665486 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.136) 0:00:46.047 *********** 2025-06-22 19:47:00.803569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:47:00.804507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  2025-06-22 19:47:00.805116 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:00.805876 | orchestrator | 2025-06-22 19:47:00.806635 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:47:00.807250 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.141) 0:00:46.189 *********** 2025-06-22 19:47:00.946461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'})  2025-06-22 19:47:00.946554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'})  
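The "Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Print LVM report data" steps repeated for each node collect the LV-to-VG and PV-to-VG mappings that end up in lvm_report. A hedged sketch of how such a report can be gathered with LVM's JSON reporting is shown below; the variable names are taken from the task titles, but the exact OSISM tasks may differ.

---
# Sketch only: builds an lvm_report fact from the LVM JSON report output.
- name: Gather Ceph LVs/PVs and build a combined report (illustrative sketch)
  hosts: ceph-osd                     # assumption: inventory group name
  become: true
  tasks:
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

Run against a node carrying the two OSD disks, the final debug task prints the same lv/pv structure as the "Print LVM report data" output shown in the log.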
2025-06-22 19:47:00.946569 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:00.946582 | orchestrator | 2025-06-22 19:47:00.946724 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:47:00.946757 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.144) 0:00:46.334 *********** 2025-06-22 19:47:01.337650 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:47:01.338494 | orchestrator |  "lvm_report": { 2025-06-22 19:47:01.339396 | orchestrator |  "lv": [ 2025-06-22 19:47:01.340433 | orchestrator |  { 2025-06-22 19:47:01.341695 | orchestrator |  "lv_name": "osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24", 2025-06-22 19:47:01.342649 | orchestrator |  "vg_name": "ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24" 2025-06-22 19:47:01.343891 | orchestrator |  }, 2025-06-22 19:47:01.344074 | orchestrator |  { 2025-06-22 19:47:01.344595 | orchestrator |  "lv_name": "osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a", 2025-06-22 19:47:01.345148 | orchestrator |  "vg_name": "ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a" 2025-06-22 19:47:01.345483 | orchestrator |  } 2025-06-22 19:47:01.345900 | orchestrator |  ], 2025-06-22 19:47:01.346441 | orchestrator |  "pv": [ 2025-06-22 19:47:01.346850 | orchestrator |  { 2025-06-22 19:47:01.347313 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:47:01.347712 | orchestrator |  "vg_name": "ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a" 2025-06-22 19:47:01.348361 | orchestrator |  }, 2025-06-22 19:47:01.348638 | orchestrator |  { 2025-06-22 19:47:01.349076 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:47:01.349524 | orchestrator |  "vg_name": "ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24" 2025-06-22 19:47:01.349905 | orchestrator |  } 2025-06-22 19:47:01.350399 | orchestrator |  ] 2025-06-22 19:47:01.350776 | orchestrator |  } 2025-06-22 19:47:01.351281 | orchestrator | } 2025-06-22 19:47:01.351877 | orchestrator | 2025-06-22 19:47:01.352196 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:47:01.352695 | orchestrator | 2025-06-22 19:47:01.353128 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:47:01.353152 | orchestrator | Sunday 22 June 2025 19:47:01 +0000 (0:00:00.389) 0:00:46.723 *********** 2025-06-22 19:47:01.564050 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:01.564440 | orchestrator | 2025-06-22 19:47:01.565079 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:47:01.566308 | orchestrator | Sunday 22 June 2025 19:47:01 +0000 (0:00:00.228) 0:00:46.951 *********** 2025-06-22 19:47:01.767813 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:01.768291 | orchestrator | 2025-06-22 19:47:01.769119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:01.769796 | orchestrator | Sunday 22 June 2025 19:47:01 +0000 (0:00:00.203) 0:00:47.155 *********** 2025-06-22 19:47:02.138907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:47:02.140023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:47:02.141152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:47:02.142201 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:47:02.142855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:47:02.144075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:47:02.144368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:47:02.145054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:47:02.145599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:47:02.146166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:47:02.146917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:47:02.147924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:47:02.148160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:47:02.148615 | orchestrator | 2025-06-22 19:47:02.149030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:02.149549 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.371) 0:00:47.526 *********** 2025-06-22 19:47:02.319798 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:02.319904 | orchestrator | 2025-06-22 19:47:02.320628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:02.321865 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.179) 0:00:47.705 *********** 2025-06-22 19:47:02.496251 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:02.496683 | orchestrator | 2025-06-22 19:47:02.496712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:02.496788 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.178) 0:00:47.884 *********** 2025-06-22 19:47:02.695392 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:02.695497 | orchestrator | 2025-06-22 19:47:02.695888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:02.696257 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.198) 0:00:48.082 *********** 2025-06-22 19:47:02.876315 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:02.876753 | orchestrator | 2025-06-22 19:47:02.877215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:02.878440 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.180) 0:00:48.263 *********** 2025-06-22 19:47:03.055851 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:03.056040 | orchestrator | 2025-06-22 19:47:03.056698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:03.058133 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.179) 0:00:48.442 *********** 2025-06-22 19:47:03.554346 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:03.555886 | orchestrator | 2025-06-22 19:47:03.555920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:03.556429 | orchestrator | 
Sunday 22 June 2025 19:47:03 +0000 (0:00:00.496) 0:00:48.939 *********** 2025-06-22 19:47:03.737061 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:03.737147 | orchestrator | 2025-06-22 19:47:03.737204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:03.737241 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.181) 0:00:49.121 *********** 2025-06-22 19:47:03.928951 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:03.929790 | orchestrator | 2025-06-22 19:47:03.929827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:03.929842 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.195) 0:00:49.316 *********** 2025-06-22 19:47:04.307286 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157) 2025-06-22 19:47:04.307451 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157) 2025-06-22 19:47:04.307776 | orchestrator | 2025-06-22 19:47:04.308856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:04.309169 | orchestrator | Sunday 22 June 2025 19:47:04 +0000 (0:00:00.375) 0:00:49.692 *********** 2025-06-22 19:47:04.678742 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b) 2025-06-22 19:47:04.679919 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b) 2025-06-22 19:47:04.681189 | orchestrator | 2025-06-22 19:47:04.682546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:04.683230 | orchestrator | Sunday 22 June 2025 19:47:04 +0000 (0:00:00.373) 0:00:50.065 *********** 2025-06-22 19:47:05.065892 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238) 2025-06-22 19:47:05.066124 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238) 2025-06-22 19:47:05.067197 | orchestrator | 2025-06-22 19:47:05.068066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:05.069305 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.386) 0:00:50.452 *********** 2025-06-22 19:47:05.442745 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6) 2025-06-22 19:47:05.446093 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6) 2025-06-22 19:47:05.446469 | orchestrator | 2025-06-22 19:47:05.448306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:05.449011 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.376) 0:00:50.828 *********** 2025-06-22 19:47:05.752230 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:47:05.754566 | orchestrator | 2025-06-22 19:47:05.755175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:05.755743 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.311) 0:00:51.140 *********** 2025-06-22 19:47:06.142466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 
19:47:06.143384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:47:06.145211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:47:06.146006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:47:06.146711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:47:06.147362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:47:06.148019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:47:06.148609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:47:06.149093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:47:06.149703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:47:06.150071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:47:06.150543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:47:06.150869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:47:06.151489 | orchestrator | 2025-06-22 19:47:06.151683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:06.152235 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.390) 0:00:51.530 *********** 2025-06-22 19:47:06.353161 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:06.353667 | orchestrator | 2025-06-22 19:47:06.354422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:06.356797 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.208) 0:00:51.738 *********** 2025-06-22 19:47:06.527958 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:06.528065 | orchestrator | 2025-06-22 19:47:06.528446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:06.529086 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.176) 0:00:51.914 *********** 2025-06-22 19:47:06.977902 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:06.978535 | orchestrator | 2025-06-22 19:47:06.979191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:06.979849 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.448) 0:00:52.363 *********** 2025-06-22 19:47:07.150208 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:07.150831 | orchestrator | 2025-06-22 19:47:07.151597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:07.152403 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.174) 0:00:52.537 *********** 2025-06-22 19:47:07.329520 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:07.331136 | orchestrator | 2025-06-22 19:47:07.331750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:07.333428 | orchestrator | 
Sunday 22 June 2025 19:47:07 +0000 (0:00:00.175) 0:00:52.713 *********** 2025-06-22 19:47:07.508473 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:07.508915 | orchestrator | 2025-06-22 19:47:07.510002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:07.510574 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.182) 0:00:52.895 *********** 2025-06-22 19:47:07.710082 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:07.710155 | orchestrator | 2025-06-22 19:47:07.711151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:07.712057 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.196) 0:00:53.091 *********** 2025-06-22 19:47:07.879746 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:07.880372 | orchestrator | 2025-06-22 19:47:07.881136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:07.881901 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.175) 0:00:53.267 *********** 2025-06-22 19:47:08.466761 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:47:08.467617 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:47:08.468351 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:47:08.469238 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:47:08.470188 | orchestrator | 2025-06-22 19:47:08.470712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:08.471552 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.586) 0:00:53.853 *********** 2025-06-22 19:47:08.650806 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:08.651710 | orchestrator | 2025-06-22 19:47:08.652422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:08.653625 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.185) 0:00:54.038 *********** 2025-06-22 19:47:08.818088 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:08.818754 | orchestrator | 2025-06-22 19:47:08.819680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:08.820354 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.166) 0:00:54.205 *********** 2025-06-22 19:47:08.991704 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:08.992682 | orchestrator | 2025-06-22 19:47:08.993974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:08.994084 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.173) 0:00:54.378 *********** 2025-06-22 19:47:09.178752 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:09.179685 | orchestrator | 2025-06-22 19:47:09.181455 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:47:09.181479 | orchestrator | Sunday 22 June 2025 19:47:09 +0000 (0:00:00.187) 0:00:54.566 *********** 2025-06-22 19:47:09.433033 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:09.433650 | orchestrator | 2025-06-22 19:47:09.435261 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:47:09.435409 | orchestrator | Sunday 22 June 2025 19:47:09 +0000 (0:00:00.252) 0:00:54.819 *********** 2025-06-22 
19:47:09.584049 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '26b627d5-c9a2-5c9e-a2df-a450422a30c2'}}) 2025-06-22 19:47:09.585035 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f64325fb-298e-5c24-b96e-fd5d866c56eb'}}) 2025-06-22 19:47:09.585876 | orchestrator | 2025-06-22 19:47:09.586625 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:47:09.587219 | orchestrator | Sunday 22 June 2025 19:47:09 +0000 (0:00:00.152) 0:00:54.971 *********** 2025-06-22 19:47:11.319784 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'}) 2025-06-22 19:47:11.320494 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'}) 2025-06-22 19:47:11.321563 | orchestrator | 2025-06-22 19:47:11.322115 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:47:11.323822 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:01.734) 0:00:56.705 *********** 2025-06-22 19:47:11.470633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:11.471292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:11.472064 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:11.472526 | orchestrator | 2025-06-22 19:47:11.473638 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:47:11.473659 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:00.150) 0:00:56.856 *********** 2025-06-22 19:47:12.677071 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'}) 2025-06-22 19:47:12.677917 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'}) 2025-06-22 19:47:12.678493 | orchestrator | 2025-06-22 19:47:12.679251 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:47:12.679827 | orchestrator | Sunday 22 June 2025 19:47:12 +0000 (0:00:01.205) 0:00:58.061 *********** 2025-06-22 19:47:12.829875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:12.830754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:12.831896 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:12.833567 | orchestrator | 2025-06-22 19:47:12.833593 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:47:12.834178 | orchestrator | Sunday 22 June 2025 19:47:12 +0000 (0:00:00.153) 0:00:58.215 *********** 2025-06-22 19:47:12.973281 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:12.975024 | 
orchestrator | 2025-06-22 19:47:12.976132 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:47:12.977350 | orchestrator | Sunday 22 June 2025 19:47:12 +0000 (0:00:00.144) 0:00:58.359 *********** 2025-06-22 19:47:13.124806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:13.125566 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:13.127313 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:13.129882 | orchestrator | 2025-06-22 19:47:13.129910 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:47:13.130520 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.152) 0:00:58.511 *********** 2025-06-22 19:47:13.269773 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:13.270886 | orchestrator | 2025-06-22 19:47:13.274136 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:47:13.274167 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.143) 0:00:58.655 *********** 2025-06-22 19:47:13.424190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:13.425394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:13.426792 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:13.428265 | orchestrator | 2025-06-22 19:47:13.429172 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:47:13.429950 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.154) 0:00:58.809 *********** 2025-06-22 19:47:13.572215 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:13.572385 | orchestrator | 2025-06-22 19:47:13.573440 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:47:13.575409 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.146) 0:00:58.955 *********** 2025-06-22 19:47:13.714968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:13.716480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:13.718153 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:13.719379 | orchestrator | 2025-06-22 19:47:13.720281 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:47:13.721054 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.145) 0:00:59.101 *********** 2025-06-22 19:47:13.859398 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:13.860324 | orchestrator | 2025-06-22 19:47:13.861377 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:47:13.862425 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.144) 
0:00:59.245 *********** 2025-06-22 19:47:14.272817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:14.274190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:14.275722 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:14.276572 | orchestrator | 2025-06-22 19:47:14.277679 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:47:14.278701 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.413) 0:00:59.659 *********** 2025-06-22 19:47:14.425146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:14.425579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:14.426933 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:14.427544 | orchestrator | 2025-06-22 19:47:14.428350 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:47:14.429407 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.151) 0:00:59.810 *********** 2025-06-22 19:47:14.573793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:14.575183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:14.576014 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:14.577117 | orchestrator | 2025-06-22 19:47:14.578478 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:47:14.578968 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.149) 0:00:59.960 *********** 2025-06-22 19:47:14.723021 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:14.723536 | orchestrator | 2025-06-22 19:47:14.724882 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:47:14.725385 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.149) 0:01:00.110 *********** 2025-06-22 19:47:14.867414 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:14.867857 | orchestrator | 2025-06-22 19:47:14.868715 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:47:14.869533 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.144) 0:01:00.254 *********** 2025-06-22 19:47:15.006774 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:15.007335 | orchestrator | 2025-06-22 19:47:15.007864 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:47:15.009144 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.138) 0:01:00.393 *********** 2025-06-22 19:47:15.131406 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:15.131539 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:47:15.131645 | orchestrator | } 2025-06-22 
19:47:15.132472 | orchestrator | 2025-06-22 19:47:15.132874 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:47:15.133532 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.125) 0:01:00.518 *********** 2025-06-22 19:47:15.266105 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:15.266207 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:47:15.266865 | orchestrator | } 2025-06-22 19:47:15.267177 | orchestrator | 2025-06-22 19:47:15.267894 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:47:15.268383 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.134) 0:01:00.653 *********** 2025-06-22 19:47:15.405744 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:15.405983 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:47:15.408023 | orchestrator | } 2025-06-22 19:47:15.409943 | orchestrator | 2025-06-22 19:47:15.410165 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:47:15.410834 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.138) 0:01:00.791 *********** 2025-06-22 19:47:15.872778 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:15.873766 | orchestrator | 2025-06-22 19:47:15.874480 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:47:15.875377 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.468) 0:01:01.260 *********** 2025-06-22 19:47:16.390155 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:16.390259 | orchestrator | 2025-06-22 19:47:16.390671 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:47:16.391482 | orchestrator | Sunday 22 June 2025 19:47:16 +0000 (0:00:00.516) 0:01:01.776 *********** 2025-06-22 19:47:16.861602 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:16.861984 | orchestrator | 2025-06-22 19:47:16.862862 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:47:16.863368 | orchestrator | Sunday 22 June 2025 19:47:16 +0000 (0:00:00.469) 0:01:02.246 *********** 2025-06-22 19:47:17.238504 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:17.239535 | orchestrator | 2025-06-22 19:47:17.240382 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:47:17.241212 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.379) 0:01:02.625 *********** 2025-06-22 19:47:17.351422 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:17.351758 | orchestrator | 2025-06-22 19:47:17.352462 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:47:17.354217 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.112) 0:01:02.738 *********** 2025-06-22 19:47:17.470317 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:17.473787 | orchestrator | 2025-06-22 19:47:17.473853 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:47:17.475037 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.114) 0:01:02.852 *********** 2025-06-22 19:47:17.605458 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:17.606415 | orchestrator |  "vgs_report": { 2025-06-22 19:47:17.608428 | orchestrator |  "vg": [] 2025-06-22 
19:47:17.609511 | orchestrator |  } 2025-06-22 19:47:17.610418 | orchestrator | } 2025-06-22 19:47:17.611054 | orchestrator | 2025-06-22 19:47:17.611666 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:47:17.612340 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.140) 0:01:02.992 *********** 2025-06-22 19:47:17.731695 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:17.732116 | orchestrator | 2025-06-22 19:47:17.732468 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:47:17.733163 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.125) 0:01:03.118 *********** 2025-06-22 19:47:17.864957 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:17.865120 | orchestrator | 2025-06-22 19:47:17.865138 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:47:17.865152 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.130) 0:01:03.249 *********** 2025-06-22 19:47:18.017177 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.017276 | orchestrator | 2025-06-22 19:47:18.017290 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:47:18.017406 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.151) 0:01:03.401 *********** 2025-06-22 19:47:18.156613 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.157619 | orchestrator | 2025-06-22 19:47:18.158366 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:47:18.159152 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.142) 0:01:03.543 *********** 2025-06-22 19:47:18.287895 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.289439 | orchestrator | 2025-06-22 19:47:18.289726 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:47:18.290628 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.128) 0:01:03.672 *********** 2025-06-22 19:47:18.431216 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.431312 | orchestrator | 2025-06-22 19:47:18.431427 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:47:18.432122 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.144) 0:01:03.816 *********** 2025-06-22 19:47:18.572347 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.572925 | orchestrator | 2025-06-22 19:47:18.573693 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:47:18.574566 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.140) 0:01:03.957 *********** 2025-06-22 19:47:18.713440 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:18.714259 | orchestrator | 2025-06-22 19:47:18.715249 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:47:18.717182 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.141) 0:01:04.099 *********** 2025-06-22 19:47:19.084927 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.085569 | orchestrator | 2025-06-22 19:47:19.086811 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:47:19.088259 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 
(0:00:00.371) 0:01:04.470 *********** 2025-06-22 19:47:19.229159 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.229417 | orchestrator | 2025-06-22 19:47:19.230667 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:47:19.231103 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.144) 0:01:04.614 *********** 2025-06-22 19:47:19.363103 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.363278 | orchestrator | 2025-06-22 19:47:19.364148 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:47:19.365094 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.134) 0:01:04.749 *********** 2025-06-22 19:47:19.505844 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.506681 | orchestrator | 2025-06-22 19:47:19.508738 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:47:19.510197 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.139) 0:01:04.889 *********** 2025-06-22 19:47:19.641615 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.642175 | orchestrator | 2025-06-22 19:47:19.642792 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:47:19.643707 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.138) 0:01:05.027 *********** 2025-06-22 19:47:19.787281 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.787387 | orchestrator | 2025-06-22 19:47:19.788386 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:47:19.789493 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.145) 0:01:05.173 *********** 2025-06-22 19:47:19.940682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:19.940775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:19.942155 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:19.944481 | orchestrator | 2025-06-22 19:47:19.944503 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:47:19.944954 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.152) 0:01:05.326 *********** 2025-06-22 19:47:20.099488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:20.099587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:20.100176 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:20.101286 | orchestrator | 2025-06-22 19:47:20.102373 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:47:20.103511 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.158) 0:01:05.485 *********** 2025-06-22 19:47:20.256884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:20.256927 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:20.257458 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:20.258289 | orchestrator | 2025-06-22 19:47:20.258869 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:47:20.259545 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.158) 0:01:05.643 *********** 2025-06-22 19:47:20.413660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:20.413716 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:20.414389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:20.415419 | orchestrator | 2025-06-22 19:47:20.415897 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:47:20.417229 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.151) 0:01:05.794 *********** 2025-06-22 19:47:20.578572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:20.579084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:20.580444 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:20.580991 | orchestrator | 2025-06-22 19:47:20.582887 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:47:20.583505 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.169) 0:01:05.964 *********** 2025-06-22 19:47:20.729223 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:20.730170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:20.733318 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:20.733673 | orchestrator | 2025-06-22 19:47:20.734644 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:47:20.735582 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.150) 0:01:06.114 *********** 2025-06-22 19:47:21.121668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:21.122338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:21.123226 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:21.124294 | orchestrator | 2025-06-22 19:47:21.125640 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:47:21.126697 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.392) 0:01:06.507 *********** 2025-06-22 
19:47:21.284473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:21.284624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:21.285788 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:21.286821 | orchestrator | 2025-06-22 19:47:21.287731 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:47:21.288736 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.162) 0:01:06.670 *********** 2025-06-22 19:47:21.755900 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:21.756192 | orchestrator | 2025-06-22 19:47:21.757116 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:47:21.758537 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.468) 0:01:07.139 *********** 2025-06-22 19:47:22.215693 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:22.215855 | orchestrator | 2025-06-22 19:47:22.217222 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:47:22.217972 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.462) 0:01:07.602 *********** 2025-06-22 19:47:22.367753 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:22.368235 | orchestrator | 2025-06-22 19:47:22.369985 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:47:22.370097 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.150) 0:01:07.752 *********** 2025-06-22 19:47:22.541233 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'vg_name': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'}) 2025-06-22 19:47:22.541871 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'vg_name': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'}) 2025-06-22 19:47:22.543420 | orchestrator | 2025-06-22 19:47:22.544506 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:47:22.545968 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.174) 0:01:07.927 *********** 2025-06-22 19:47:22.705804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:22.706217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:22.707603 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:22.708874 | orchestrator | 2025-06-22 19:47:22.710327 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:47:22.711612 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.163) 0:01:08.090 *********** 2025-06-22 19:47:22.871679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:22.872822 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 
'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:22.873805 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:22.876572 | orchestrator | 2025-06-22 19:47:22.877615 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:47:22.878440 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.167) 0:01:08.258 *********** 2025-06-22 19:47:23.029977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'})  2025-06-22 19:47:23.030343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'})  2025-06-22 19:47:23.031448 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:23.031900 | orchestrator | 2025-06-22 19:47:23.032659 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:47:23.033410 | orchestrator | Sunday 22 June 2025 19:47:23 +0000 (0:00:00.158) 0:01:08.416 *********** 2025-06-22 19:47:23.171878 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:23.171974 | orchestrator |  "lvm_report": { 2025-06-22 19:47:23.172604 | orchestrator |  "lv": [ 2025-06-22 19:47:23.173143 | orchestrator |  { 2025-06-22 19:47:23.174634 | orchestrator |  "lv_name": "osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2", 2025-06-22 19:47:23.174751 | orchestrator |  "vg_name": "ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2" 2025-06-22 19:47:23.175273 | orchestrator |  }, 2025-06-22 19:47:23.176373 | orchestrator |  { 2025-06-22 19:47:23.176803 | orchestrator |  "lv_name": "osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb", 2025-06-22 19:47:23.177205 | orchestrator |  "vg_name": "ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb" 2025-06-22 19:47:23.177889 | orchestrator |  } 2025-06-22 19:47:23.178522 | orchestrator |  ], 2025-06-22 19:47:23.180225 | orchestrator |  "pv": [ 2025-06-22 19:47:23.180714 | orchestrator |  { 2025-06-22 19:47:23.181189 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:47:23.181450 | orchestrator |  "vg_name": "ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2" 2025-06-22 19:47:23.182226 | orchestrator |  }, 2025-06-22 19:47:23.182661 | orchestrator |  { 2025-06-22 19:47:23.183127 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:47:23.183752 | orchestrator |  "vg_name": "ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb" 2025-06-22 19:47:23.183969 | orchestrator |  } 2025-06-22 19:47:23.184144 | orchestrator |  ] 2025-06-22 19:47:23.184648 | orchestrator |  } 2025-06-22 19:47:23.185227 | orchestrator | } 2025-06-22 19:47:23.185662 | orchestrator | 2025-06-22 19:47:23.186130 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:47:23.186548 | orchestrator | 2025-06-22 19:47:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:47:23.186738 | orchestrator | 2025-06-22 19:47:23 | INFO  | Please wait and do not abort execution. 
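Note on the play above: for every entry in ceph_osd_devices it ends up with one volume group named ceph-<osd_lvm_uuid> on the raw device and one logical volume named osd-block-<osd_lvm_uuid> inside it, which is exactly the layout the lvm_report output shows for /dev/sdb and /dev/sdc. A condensed Ansible sketch of that mapping, assuming the ceph_osd_devices dictionary printed earlier (item.key = device, item.value.osd_lvm_uuid = UUID); this is a simplified illustration, not the OSISM task files under /ansible/ — the real play first builds a VG-to-PV dict and then loops over lvm_volumes-style entries for "Create block VGs" and "Create block LVs":

    # Sketch only: one VG per OSD device, one block LV filling it.
    - name: Create block VGs (sketch)
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs (sketch)
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
      loop: "{{ ceph_osd_devices | dict2items }}"

The lvm_report data printed above corresponds to what 'lvs -o lv_name,vg_name --reportformat json' and 'pvs -o pv_name,vg_name --reportformat json' return on the node.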
2025-06-22 19:47:23.187192 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:47:23.187639 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:47:23.187972 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:47:23.188354 | orchestrator | 2025-06-22 19:47:23.188897 | orchestrator | 2025-06-22 19:47:23.189425 | orchestrator | 2025-06-22 19:47:23.189769 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:47:23.189990 | orchestrator | Sunday 22 June 2025 19:47:23 +0000 (0:00:00.141) 0:01:08.558 *********** 2025-06-22 19:47:23.190498 | orchestrator | =============================================================================== 2025-06-22 19:47:23.190951 | orchestrator | Create block VGs -------------------------------------------------------- 5.40s 2025-06-22 19:47:23.191311 | orchestrator | Create block LVs -------------------------------------------------------- 3.88s 2025-06-22 19:47:23.191759 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.74s 2025-06-22 19:47:23.192284 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2025-06-22 19:47:23.192558 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.45s 2025-06-22 19:47:23.192854 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.45s 2025-06-22 19:47:23.193339 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.43s 2025-06-22 19:47:23.193949 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.42s 2025-06-22 19:47:23.194341 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s 2025-06-22 19:47:23.194621 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-06-22 19:47:23.194948 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-06-22 19:47:23.195261 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s 2025-06-22 19:47:23.195560 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-06-22 19:47:23.195798 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-06-22 19:47:23.196107 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.71s 2025-06-22 19:47:23.196383 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.71s 2025-06-22 19:47:23.196625 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2025-06-22 19:47:23.196877 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.68s 2025-06-22 19:47:23.197084 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.68s 2025-06-22 19:47:23.197428 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.67s 2025-06-22 19:47:25.725656 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:47:25.725746 | orchestrator | Registering Redlock._extend_script 2025-06-22 
19:47:25.725760 | orchestrator | Registering Redlock._release_script 2025-06-22 19:47:25.784675 | orchestrator | 2025-06-22 19:47:25 | INFO  | Task 579bf19e-c90d-499e-80a0-e17c401d468b (facts) was prepared for execution. 2025-06-22 19:47:25.784743 | orchestrator | 2025-06-22 19:47:25 | INFO  | It takes a moment until task 579bf19e-c90d-499e-80a0-e17c401d468b (facts) has been started and output is visible here. 2025-06-22 19:47:29.641783 | orchestrator | 2025-06-22 19:47:29.645694 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:47:29.646328 | orchestrator | 2025-06-22 19:47:29.646812 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:47:29.647559 | orchestrator | Sunday 22 June 2025 19:47:29 +0000 (0:00:00.204) 0:00:00.204 *********** 2025-06-22 19:47:30.514673 | orchestrator | ok: [testbed-manager] 2025-06-22 19:47:30.515641 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:47:30.518584 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:47:30.518609 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:47:30.518621 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:30.518633 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:30.519234 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:30.519783 | orchestrator | 2025-06-22 19:47:30.520867 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:47:30.521829 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.872) 0:00:01.077 *********** 2025-06-22 19:47:30.658609 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:47:30.729831 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:47:30.801544 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:47:30.871643 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:47:30.941542 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:31.586750 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:31.587518 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:31.588461 | orchestrator | 2025-06-22 19:47:31.589184 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:47:31.591325 | orchestrator | 2025-06-22 19:47:31.591414 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:47:31.591512 | orchestrator | Sunday 22 June 2025 19:47:31 +0000 (0:00:01.075) 0:00:02.152 *********** 2025-06-22 19:47:36.044570 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:47:36.044675 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:47:36.045013 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:47:36.046452 | orchestrator | ok: [testbed-manager] 2025-06-22 19:47:36.046821 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:36.048494 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:36.049417 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:36.050530 | orchestrator | 2025-06-22 19:47:36.051326 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:47:36.052170 | orchestrator | 2025-06-22 19:47:36.053164 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:47:36.053772 | orchestrator | Sunday 22 June 2025 19:47:36 +0000 (0:00:04.454) 0:00:06.607 *********** 2025-06-22 19:47:36.211346 | orchestrator | skipping: [testbed-manager] 
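For context on the osism.commons.facts run in progress here: it only ensures the custom facts directory exists on every host and then refreshes the cached facts that the following deployment plays rely on; the fact-file copy and the --limit variant are skipped in this build. A rough stand-in (play layout and the /etc/ansible/facts.d path are assumptions, not the osism.commons.facts implementation):

    - name: Apply role facts (sketch)
      hosts: all
      tasks:
        - name: Create custom facts directory
          ansible.builtin.file:
            path: /etc/ansible/facts.d   # assumed default location for custom facts
            state: directory
            mode: "0755"

    - name: Gather facts for all hosts (sketch)
      hosts: all
      gather_facts: false
      tasks:
        - name: Gathers facts about hosts
          ansible.builtin.setup: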
2025-06-22 19:47:36.290605 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:47:36.368010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:47:36.462337 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:47:36.544291 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:36.586979 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:36.587108 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:36.588525 | orchestrator | 2025-06-22 19:47:36.590206 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:47:36.590636 | orchestrator | 2025-06-22 19:47:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:47:36.591389 | orchestrator | 2025-06-22 19:47:36 | INFO  | Please wait and do not abort execution. 2025-06-22 19:47:36.592324 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.593406 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.594007 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.594747 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.595468 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.596005 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.596841 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:36.597394 | orchestrator | 2025-06-22 19:47:36.598109 | orchestrator | 2025-06-22 19:47:36.598708 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:47:36.599352 | orchestrator | Sunday 22 June 2025 19:47:36 +0000 (0:00:00.544) 0:00:07.152 *********** 2025-06-22 19:47:36.599963 | orchestrator | =============================================================================== 2025-06-22 19:47:36.600480 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.45s 2025-06-22 19:47:36.601495 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-06-22 19:47:36.601973 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.87s 2025-06-22 19:47:36.602588 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-22 19:47:37.259362 | orchestrator | 2025-06-22 19:47:37.262165 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 22 19:47:37 UTC 2025 2025-06-22 19:47:37.262214 | orchestrator | 2025-06-22 19:47:38.987756 | orchestrator | 2025-06-22 19:47:38 | INFO  | Collection nutshell is prepared for execution 2025-06-22 19:47:38.987845 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [0] - dotfiles 2025-06-22 19:47:38.992444 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:47:38.992467 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:47:38.992476 | orchestrator | Registering Redlock._release_script 2025-06-22 19:47:38.997814 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [0] - homer 2025-06-22 19:47:38.997858 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [0] - 
netdata 2025-06-22 19:47:38.997870 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [0] - openstackclient 2025-06-22 19:47:38.997882 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [0] - phpmyadmin 2025-06-22 19:47:38.997922 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [0] - common 2025-06-22 19:47:38.999534 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [1] -- loadbalancer 2025-06-22 19:47:39.000165 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [2] --- opensearch 2025-06-22 19:47:39.000203 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [2] --- mariadb-ng 2025-06-22 19:47:39.000222 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [3] ---- horizon 2025-06-22 19:47:39.000239 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [3] ---- keystone 2025-06-22 19:47:39.000258 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [4] ----- neutron 2025-06-22 19:47:39.000277 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [5] ------ wait-for-nova 2025-06-22 19:47:39.000297 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [5] ------ octavia 2025-06-22 19:47:39.000700 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [4] ----- barbican 2025-06-22 19:47:39.000753 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [4] ----- designate 2025-06-22 19:47:39.000860 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [4] ----- ironic 2025-06-22 19:47:39.000876 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [4] ----- placement 2025-06-22 19:47:39.000888 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [4] ----- magnum 2025-06-22 19:47:39.001518 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [1] -- openvswitch 2025-06-22 19:47:39.001546 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [2] --- ovn 2025-06-22 19:47:39.001608 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [1] -- memcached 2025-06-22 19:47:39.001739 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [1] -- redis 2025-06-22 19:47:39.001756 | orchestrator | 2025-06-22 19:47:38 | INFO  | D [1] -- rabbitmq-ng 2025-06-22 19:47:39.001865 | orchestrator | 2025-06-22 19:47:38 | INFO  | A [0] - kubernetes 2025-06-22 19:47:39.003559 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [1] -- kubeconfig 2025-06-22 19:47:39.003592 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [1] -- copy-kubeconfig 2025-06-22 19:47:39.003773 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [0] - ceph 2025-06-22 19:47:39.005324 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [1] -- ceph-pools 2025-06-22 19:47:39.005350 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [2] --- copy-ceph-keys 2025-06-22 19:47:39.005605 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [3] ---- cephclient 2025-06-22 19:47:39.005626 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-22 19:47:39.005637 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [4] ----- wait-for-keystone 2025-06-22 19:47:39.005712 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-22 19:47:39.005728 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [5] ------ glance 2025-06-22 19:47:39.006129 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [5] ------ cinder 2025-06-22 19:47:39.006162 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [5] ------ nova 2025-06-22 19:47:39.006289 | orchestrator | 2025-06-22 19:47:39 | INFO  | A [4] ----- prometheus 2025-06-22 19:47:39.006308 | orchestrator | 2025-06-22 19:47:39 | INFO  | D [5] ------ grafana 2025-06-22 19:47:39.181297 | orchestrator | 2025-06-22 19:47:39 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-06-22 19:47:39.181401 | orchestrator | 2025-06-22 19:47:39 | INFO  | Tasks are running in the background 2025-06-22 19:47:41.813627 | orchestrator | 2025-06-22 19:47:41 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-22 19:47:43.927813 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:43.931437 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:43.938570 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:43.938618 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:43.938914 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:43.946571 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:43.946619 | orchestrator | 2025-06-22 19:47:43 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:43.946631 | orchestrator | 2025-06-22 19:47:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:47:46.978357 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:46.980602 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:46.980649 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:46.980998 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:46.984948 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:46.985418 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:46.985957 | orchestrator | 2025-06-22 19:47:46 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:46.985996 | orchestrator | 2025-06-22 19:47:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:47:50.021717 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:50.021800 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:50.023196 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:50.024898 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:50.027936 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:50.028367 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:50.028977 | orchestrator | 2025-06-22 19:47:50 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:50.029072 | orchestrator | 2025-06-22 19:47:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:47:53.091683 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task 
ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:53.091771 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:53.091787 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:53.092327 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:53.094730 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:53.094821 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:53.100833 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:53.100896 | orchestrator | 2025-06-22 19:47:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:47:56.227883 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:56.227975 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:56.231042 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:56.237026 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:56.237090 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:56.237103 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:56.237114 | orchestrator | 2025-06-22 19:47:56 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:56.237169 | orchestrator | 2025-06-22 19:47:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:47:59.306946 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:47:59.307577 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:47:59.308247 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:47:59.309986 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:47:59.310616 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:47:59.311636 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:47:59.312207 | orchestrator | 2025-06-22 19:47:59 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:47:59.312218 | orchestrator | 2025-06-22 19:47:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:02.381773 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:02.381871 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:48:02.381887 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:02.381899 | 
orchestrator | 2025-06-22 19:48:02 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:02.381910 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:02.383321 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:02.383370 | orchestrator | 2025-06-22 19:48:02 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:02.383393 | orchestrator | 2025-06-22 19:48:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:05.459520 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:05.463654 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:48:05.463713 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:05.463735 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:05.466671 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:05.466711 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:05.468874 | orchestrator | 2025-06-22 19:48:05 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:05.468899 | orchestrator | 2025-06-22 19:48:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:08.552165 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:08.554185 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state STARTED 2025-06-22 19:48:08.556656 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:08.558757 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:08.559378 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:08.561195 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:08.562625 | orchestrator | 2025-06-22 19:48:08 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:08.562660 | orchestrator | 2025-06-22 19:48:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:11.615531 | orchestrator | 2025-06-22 19:48:11.615615 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-22 19:48:11.615631 | orchestrator | 2025-06-22 19:48:11.615643 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-06-22 19:48:11.615654 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.851) 0:00:00.851 *********** 2025-06-22 19:48:11.615665 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:11.615677 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:48:11.615688 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:48:11.615699 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:48:11.615710 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:48:11.615720 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:48:11.615731 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:48:11.615742 | orchestrator | 2025-06-22 19:48:11.615753 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-22 19:48:11.615764 | orchestrator | Sunday 22 June 2025 19:47:56 +0000 (0:00:04.789) 0:00:05.641 *********** 2025-06-22 19:48:11.615776 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:48:11.615787 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:48:11.615798 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:48:11.615809 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:48:11.615820 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:48:11.615831 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:48:11.615841 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:48:11.615892 | orchestrator | 2025-06-22 19:48:11.615913 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-22 19:48:11.615948 | orchestrator | Sunday 22 June 2025 19:47:58 +0000 (0:00:02.073) 0:00:07.714 *********** 2025-06-22 19:48:11.615963 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.279346', 'end': '2025-06-22 19:47:57.284068', 'delta': '0:00:00.004722', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.615978 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.213583', 'end': '2025-06-22 19:47:57.221759', 'delta': '0:00:00.008176', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.615990 | 
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.268315', 'end': '2025-06-22 19:47:57.273080', 'delta': '0:00:00.004765', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.616028 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.317505', 'end': '2025-06-22 19:47:57.325323', 'delta': '0:00:00.007818', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.616046 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.435036', 'end': '2025-06-22 19:47:57.442840', 'delta': '0:00:00.007804', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.616091 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:57.653813', 'end': '2025-06-22 19:47:57.657560', 'delta': '0:00:00.003747', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.616105 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:47:58.126285', 'end': '2025-06-22 19:47:58.131831', 'delta': '0:00:00.005546', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:48:11.616130 | orchestrator | 2025-06-22 19:48:11.616153 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-06-22 19:48:11.616166 | orchestrator | Sunday 22 June 2025 19:48:01 +0000 (0:00:02.813) 0:00:10.529 *********** 2025-06-22 19:48:11.616178 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:48:11.616191 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:48:11.616202 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:48:11.616214 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:48:11.616226 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:48:11.616238 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:48:11.616250 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:48:11.616262 | orchestrator | 2025-06-22 19:48:11.616274 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-22 19:48:11.616287 | orchestrator | Sunday 22 June 2025 19:48:03 +0000 (0:00:02.432) 0:00:12.961 *********** 2025-06-22 19:48:11.616299 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:48:11.616311 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:48:11.616323 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:48:11.616335 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:48:11.616378 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:48:11.616390 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:48:11.616400 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:48:11.616411 | orchestrator | 2025-06-22 19:48:11.616422 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:11.616441 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616461 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616472 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616483 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616494 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616505 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616516 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:11.616526 | orchestrator | 2025-06-22 19:48:11.616537 | orchestrator | 2025-06-22 19:48:11.616548 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:11.616559 | orchestrator | Sunday 22 June 2025 19:48:08 +0000 (0:00:04.983) 0:00:17.944 *********** 2025-06-22 19:48:11.616570 | orchestrator | =============================================================================== 2025-06-22 19:48:11.616581 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.98s 2025-06-22 19:48:11.616592 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.79s 2025-06-22 19:48:11.616603 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.81s 2025-06-22 19:48:11.616613 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.43s 2025-06-22 19:48:11.616624 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.07s 2025-06-22 19:48:11.616667 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:11.616944 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task bccf67b7-9eb4-48ef-960c-7309558780bb is in state SUCCESS 2025-06-22 19:48:11.617051 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:11.617068 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:11.618154 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:11.618594 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:11.621288 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:11.621620 | orchestrator | 2025-06-22 19:48:11 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:11.622134 | orchestrator | 2025-06-22 19:48:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:14.707187 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:14.708386 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:14.711265 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:14.713261 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:14.717826 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:14.723195 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:14.727427 | orchestrator | 2025-06-22 19:48:14 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:14.728908 | orchestrator | 2025-06-22 19:48:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:17.798821 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 
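The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries come from a watcher on the manager that polls the state of the background deployment tasks until each one reports a final state. A minimal Python sketch of such a watcher follows; the task IDs, the get_state() helper, and the one-second interval are illustrative assumptions, not the actual OSISM code.

import time

# Hypothetical stand-in for an API call returning a task's current state,
# e.g. "STARTED" or "SUCCESS". Not the real OSISM/Celery client.
def get_state(task_id: str, _calls={"n": 0}) -> str:
    _calls["n"] += 1
    # Pretend tasks finish after a few polls so the example terminates.
    return "SUCCESS" if _calls["n"] > 6 else "STARTED"

def wait_for_tasks(task_ids, interval=1.0):
    """Poll all task IDs until every one reaches SUCCESS or FAILURE."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)  # stop reporting finished tasks
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["ed1ade70", "bccf67b7", "b549db4e"])

Once a task reaches a final state it drops out of the polling set, which is why finished task IDs (for example bccf67b7-… after the dotfiles play above) no longer appear in later polling rounds.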
2025-06-22 19:48:17.798903 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:17.804688 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:17.804735 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:17.808573 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:17.808605 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:17.813439 | orchestrator | 2025-06-22 19:48:17 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:17.813482 | orchestrator | 2025-06-22 19:48:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:20.876411 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:20.876498 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:20.882684 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:20.882727 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:20.887249 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:20.894960 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:20.895032 | orchestrator | 2025-06-22 19:48:20 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:20.895047 | orchestrator | 2025-06-22 19:48:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:23.936734 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:23.936909 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:23.939276 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:23.940197 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:23.943579 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:23.948344 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:23.948818 | orchestrator | 2025-06-22 19:48:23 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:23.948838 | orchestrator | 2025-06-22 19:48:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:27.011797 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:27.015495 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:27.022162 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:27.026156 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task 
5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:27.032736 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:27.038921 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:27.049767 | orchestrator | 2025-06-22 19:48:27 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state STARTED 2025-06-22 19:48:27.049808 | orchestrator | 2025-06-22 19:48:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:30.108077 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:30.110997 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:30.111035 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:30.111047 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:30.111058 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:30.111069 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:30.111080 | orchestrator | 2025-06-22 19:48:30 | INFO  | Task 07746516-3c99-4123-8c23-31b49685e30d is in state SUCCESS 2025-06-22 19:48:30.111092 | orchestrator | 2025-06-22 19:48:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:33.163157 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:33.163256 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:33.168755 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:33.168816 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:33.168834 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:33.168851 | orchestrator | 2025-06-22 19:48:33 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:33.168870 | orchestrator | 2025-06-22 19:48:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:36.211792 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:36.211879 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:36.212727 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:36.215303 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state STARTED 2025-06-22 19:48:36.215959 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:36.217748 | orchestrator | 2025-06-22 19:48:36 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:36.220724 | orchestrator | 2025-06-22 19:48:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:39.258081 | orchestrator | 2025-06-22 
19:48:39 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:39.258195 | orchestrator | 2025-06-22 19:48:39 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:39.258209 | orchestrator | 2025-06-22 19:48:39 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:39.258635 | orchestrator | 2025-06-22 19:48:39 | INFO  | Task 5f6ec457-c9d7-4f55-8b18-927d06b232c7 is in state SUCCESS 2025-06-22 19:48:39.259050 | orchestrator | 2025-06-22 19:48:39 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:39.259606 | orchestrator | 2025-06-22 19:48:39 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:39.259707 | orchestrator | 2025-06-22 19:48:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:42.293021 | orchestrator | 2025-06-22 19:48:42 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state STARTED 2025-06-22 19:48:42.293659 | orchestrator | 2025-06-22 19:48:42 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:42.293846 | orchestrator | 2025-06-22 19:48:42 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:42.295151 | orchestrator | 2025-06-22 19:48:42 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:42.295810 | orchestrator | 2025-06-22 19:48:42 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:42.295882 | orchestrator | 2025-06-22 19:48:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:45.347033 | orchestrator | 2025-06-22 19:48:45.347155 | orchestrator | 2025-06-22 19:48:45.347173 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-22 19:48:45.347185 | orchestrator | 2025-06-22 19:48:45.347234 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-22 19:48:45.347246 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.701) 0:00:00.701 *********** 2025-06-22 19:48:45.347257 | orchestrator | ok: [testbed-manager] => { 2025-06-22 19:48:45.347269 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-06-22 19:48:45.347282 | orchestrator | } 2025-06-22 19:48:45.347293 | orchestrator | 2025-06-22 19:48:45.347304 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-22 19:48:45.347315 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.537) 0:00:01.238 *********** 2025-06-22 19:48:45.347325 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.347337 | orchestrator | 2025-06-22 19:48:45.347347 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-22 19:48:45.347383 | orchestrator | Sunday 22 June 2025 19:47:53 +0000 (0:00:02.075) 0:00:03.313 *********** 2025-06-22 19:48:45.347395 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-22 19:48:45.347406 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-22 19:48:45.347417 | orchestrator | 2025-06-22 19:48:45.347428 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-22 19:48:45.347439 | orchestrator | Sunday 22 June 2025 19:47:55 +0000 (0:00:01.560) 0:00:04.873 *********** 2025-06-22 19:48:45.347450 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.347461 | orchestrator | 2025-06-22 19:48:45.347471 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-22 19:48:45.347495 | orchestrator | Sunday 22 June 2025 19:47:58 +0000 (0:00:02.644) 0:00:07.518 *********** 2025-06-22 19:48:45.347507 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.347539 | orchestrator | 2025-06-22 19:48:45.347551 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-22 19:48:45.347563 | orchestrator | Sunday 22 June 2025 19:48:00 +0000 (0:00:01.899) 0:00:09.417 *********** 2025-06-22 19:48:45.347574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
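The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)" entry is Ansible's retries/until pattern: the task re-runs its check until it passes or the retry budget is exhausted, and here the next attempt succeeds. A rough Python sketch of that bounded-retry idea, with an invented service_is_active() predicate and invented retry/delay values:

import time

def retry_until(check, retries=10, delay=2.0, name="service"):
    """Re-run `check` until it returns True or the retry budget is used up."""
    for attempt in range(retries, 0, -1):
        if check():
            return True
        print(f"FAILED - RETRYING: {name} ({attempt} retries left)")
        time.sleep(delay)
    return False

# Illustrative predicate: pretend the service becomes active on the third try.
_calls = {"n": 0}
def service_is_active() -> bool:
    _calls["n"] += 1
    return _calls["n"] >= 3

if retry_until(service_is_active, retries=10, delay=0.1, name="Manage homer service"):
    print("ok: [testbed-manager]")
else:
    print("fatal: service never became active")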
2025-06-22 19:48:45.347585 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.347598 | orchestrator | 2025-06-22 19:48:45.347610 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-22 19:48:45.347623 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:24.862) 0:00:34.279 *********** 2025-06-22 19:48:45.347635 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.347647 | orchestrator | 2025-06-22 19:48:45.347658 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:45.347672 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:45.347685 | orchestrator | 2025-06-22 19:48:45.347697 | orchestrator | 2025-06-22 19:48:45.347709 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:45.347721 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:01.875) 0:00:36.154 *********** 2025-06-22 19:48:45.347733 | orchestrator | =============================================================================== 2025-06-22 19:48:45.347745 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.86s 2025-06-22 19:48:45.347758 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.64s 2025-06-22 19:48:45.347770 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.08s 2025-06-22 19:48:45.347782 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.90s 2025-06-22 19:48:45.347794 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.88s 2025-06-22 19:48:45.347806 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.56s 2025-06-22 19:48:45.347818 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.54s 2025-06-22 19:48:45.347830 | orchestrator | 2025-06-22 19:48:45.347841 | orchestrator | 2025-06-22 19:48:45.347852 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-22 19:48:45.347863 | orchestrator | 2025-06-22 19:48:45.347874 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-22 19:48:45.347885 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.265) 0:00:00.265 *********** 2025-06-22 19:48:45.347895 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.347906 | orchestrator | 2025-06-22 19:48:45.347917 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-22 19:48:45.347928 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:01.580) 0:00:01.846 *********** 2025-06-22 19:48:45.347939 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-22 19:48:45.347950 | orchestrator | 2025-06-22 19:48:45.347961 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-22 19:48:45.347972 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:01.112) 0:00:02.958 *********** 2025-06-22 19:48:45.347983 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.347994 | orchestrator | 2025-06-22 19:48:45.348004 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-22 
19:48:45.348015 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:01.661) 0:00:04.620 *********** 2025-06-22 19:48:45.348028 | orchestrator | fatal: [testbed-manager]: FAILED! => {"msg": "The conditional check 'result[\"status\"][\"ActiveState\"] == \"active\"' failed. The error was: error while evaluating conditional (result[\"status\"][\"ActiveState\"] == \"active\"): 'dict object' has no attribute 'status'"} 2025-06-22 19:48:45.348046 | orchestrator | 2025-06-22 19:48:45.348072 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:45.348084 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-22 19:48:45.348103 | orchestrator | 2025-06-22 19:48:45.348174 | orchestrator | 2025-06-22 19:48:45.348188 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:45.348199 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:17.749) 0:00:22.369 *********** 2025-06-22 19:48:45.348210 | orchestrator | =============================================================================== 2025-06-22 19:48:45.348220 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 17.75s 2025-06-22 19:48:45.348231 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.66s 2025-06-22 19:48:45.348242 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.58s 2025-06-22 19:48:45.348253 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.11s 2025-06-22 19:48:45.348264 | orchestrator | 2025-06-22 19:48:45.348275 | orchestrator | 2025-06-22 19:48:45.348286 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-22 19:48:45.348297 | orchestrator | 2025-06-22 19:48:45.348308 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-22 19:48:45.348318 | orchestrator | Sunday 22 June 2025 19:47:50 +0000 (0:00:00.462) 0:00:00.462 *********** 2025-06-22 19:48:45.348329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-22 19:48:45.348341 | orchestrator | 2025-06-22 19:48:45.348352 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-22 19:48:45.348369 | orchestrator | Sunday 22 June 2025 19:47:50 +0000 (0:00:00.533) 0:00:00.996 *********** 2025-06-22 19:48:45.348380 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-22 19:48:45.348390 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-22 19:48:45.348401 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-22 19:48:45.348412 | orchestrator | 2025-06-22 19:48:45.348423 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-22 19:48:45.348434 | orchestrator | Sunday 22 June 2025 19:47:52 +0000 (0:00:02.227) 0:00:03.223 *********** 2025-06-22 19:48:45.348444 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.348455 | orchestrator | 2025-06-22 19:48:45.348466 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-22 19:48:45.348477 | orchestrator | 
Sunday 22 June 2025 19:47:55 +0000 (0:00:02.072) 0:00:05.296 *********** 2025-06-22 19:48:45.348488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-22 19:48:45.348498 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.348509 | orchestrator | 2025-06-22 19:48:45.348520 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-22 19:48:45.348531 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:40.509) 0:00:45.806 *********** 2025-06-22 19:48:45.348542 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.348553 | orchestrator | 2025-06-22 19:48:45.348564 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-22 19:48:45.348574 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:01.527) 0:00:47.334 *********** 2025-06-22 19:48:45.348585 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.348596 | orchestrator | 2025-06-22 19:48:45.348607 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-22 19:48:45.348618 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:01.607) 0:00:48.942 *********** 2025-06-22 19:48:45.348629 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.348640 | orchestrator | 2025-06-22 19:48:45.348651 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-22 19:48:45.348662 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:01.490) 0:00:50.432 *********** 2025-06-22 19:48:45.348679 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.348690 | orchestrator | 2025-06-22 19:48:45.348701 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-22 19:48:45.348712 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.689) 0:00:51.121 *********** 2025-06-22 19:48:45.348723 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:45.348734 | orchestrator | 2025-06-22 19:48:45.348745 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-22 19:48:45.348756 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.788) 0:00:51.910 *********** 2025-06-22 19:48:45.348766 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:45.348782 | orchestrator | 2025-06-22 19:48:45.348800 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:45.348811 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:48:45.348822 | orchestrator | 2025-06-22 19:48:45.348833 | orchestrator | 2025-06-22 19:48:45.348843 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:45.348854 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.318) 0:00:52.229 *********** 2025-06-22 19:48:45.348865 | orchestrator | =============================================================================== 2025-06-22 19:48:45.348875 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.51s 2025-06-22 19:48:45.348886 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.23s 2025-06-22 19:48:45.348896 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.07s 
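The phpmyadmin play above failed because its conditional dereferenced result["status"]["ActiveState"] on a task result that contained no status key at all, so Jinja reported "'dict object' has no attribute 'status'". The sketch below is a loose Python analogue of the failing check and a defensive variant; the result dictionaries are invented, and a real fix would guard the Ansible conditional itself (for example by first testing that result.status is defined).

# Two made-up module results: one with the expected systemd status block,
# one without it (which is what the failed phpmyadmin task apparently got back).
result_ok = {"status": {"ActiveState": "active"}}
result_broken = {"msg": "service state could not be determined"}

def is_active_naive(result: dict) -> bool:
    # Equivalent of the failing conditional: blows up if "status" is missing.
    return result["status"]["ActiveState"] == "active"

def is_active_safe(result: dict) -> bool:
    # Defensive variant: treat a missing or partial status block as "not active".
    return result.get("status", {}).get("ActiveState") == "active"

print(is_active_safe(result_ok))      # True
print(is_active_safe(result_broken))  # False instead of an exception
try:
    is_active_naive(result_broken)
except KeyError as exc:
    print(f"naive check fails just like the log: missing key {exc}")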
2025-06-22 19:48:45.348907 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.61s 2025-06-22 19:48:45.348925 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.53s 2025-06-22 19:48:45.348936 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.49s 2025-06-22 19:48:45.348947 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.79s 2025-06-22 19:48:45.348958 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.69s 2025-06-22 19:48:45.348968 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.53s 2025-06-22 19:48:45.348979 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2025-06-22 19:48:45.349020 | orchestrator | 2025-06-22 19:48:45 | INFO  | Task ed1ade70-e17c-44a2-85f3-3742aa9c70df is in state SUCCESS 2025-06-22 19:48:45.349179 | orchestrator | 2025-06-22 19:48:45 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:45.349199 | orchestrator | 2025-06-22 19:48:45 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:45.355596 | orchestrator | 2025-06-22 19:48:45 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:45.355633 | orchestrator | 2025-06-22 19:48:45 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:45.355645 | orchestrator | 2025-06-22 19:48:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:48.395000 | orchestrator | 2025-06-22 19:48:48 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:48.398786 | orchestrator | 2025-06-22 19:48:48 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:48.403859 | orchestrator | 2025-06-22 19:48:48 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:48.406323 | orchestrator | 2025-06-22 19:48:48 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:48.406709 | orchestrator | 2025-06-22 19:48:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:51.440946 | orchestrator | 2025-06-22 19:48:51 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:51.441479 | orchestrator | 2025-06-22 19:48:51 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state STARTED 2025-06-22 19:48:51.442812 | orchestrator | 2025-06-22 19:48:51 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:48:51.443566 | orchestrator | 2025-06-22 19:48:51 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state STARTED 2025-06-22 19:48:51.443587 | orchestrator | 2025-06-22 19:48:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:48:54.499570 | orchestrator | 2025-06-22 19:48:54 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:48:54.499928 | orchestrator | 2025-06-22 19:48:54 | INFO  | Task 8935f10c-2952-46df-bde7-e1ef5bc2616c is in state SUCCESS 2025-06-22 19:48:54.501423 | orchestrator | 2025-06-22 19:48:54.501469 | orchestrator | 2025-06-22 19:48:54.501482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:48:54.501494 | orchestrator | 2025-06-22 19:48:54.501505 | orchestrator | TASK 
[Group hosts based on enabled services] *********************************** 2025-06-22 19:48:54.501517 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.717) 0:00:00.717 *********** 2025-06-22 19:48:54.501528 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-22 19:48:54.501539 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-22 19:48:54.501550 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-22 19:48:54.501561 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-22 19:48:54.501572 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-22 19:48:54.501582 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-22 19:48:54.501593 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-22 19:48:54.501604 | orchestrator | 2025-06-22 19:48:54.501614 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-22 19:48:54.501625 | orchestrator | 2025-06-22 19:48:54.501636 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-22 19:48:54.501647 | orchestrator | Sunday 22 June 2025 19:47:54 +0000 (0:00:02.832) 0:00:03.549 *********** 2025-06-22 19:48:54.501672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:48:54.501686 | orchestrator | 2025-06-22 19:48:54.501697 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-22 19:48:54.501708 | orchestrator | Sunday 22 June 2025 19:47:57 +0000 (0:00:03.141) 0:00:06.691 *********** 2025-06-22 19:48:54.501719 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:54.501731 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:54.501742 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:54.501753 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:54.501763 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:54.501774 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.501785 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:54.501795 | orchestrator | 2025-06-22 19:48:54.501806 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-22 19:48:54.501817 | orchestrator | Sunday 22 June 2025 19:47:59 +0000 (0:00:02.501) 0:00:09.192 *********** 2025-06-22 19:48:54.501831 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:54.501850 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:54.501868 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:54.501880 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:54.501890 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:54.501901 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.501933 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:54.501944 | orchestrator | 2025-06-22 19:48:54.501955 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-22 19:48:54.501966 | orchestrator | Sunday 22 June 2025 19:48:03 +0000 (0:00:03.163) 0:00:12.356 *********** 2025-06-22 19:48:54.501977 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:54.501988 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:48:54.501998 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:48:54.502010 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:48:54.502118 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:48:54.502168 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:48:54.502187 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:48:54.502256 | orchestrator | 2025-06-22 19:48:54.502269 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-22 19:48:54.502280 | orchestrator | Sunday 22 June 2025 19:48:06 +0000 (0:00:03.315) 0:00:15.671 *********** 2025-06-22 19:48:54.502291 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:54.502301 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:48:54.502312 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:48:54.502323 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:48:54.502333 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:48:54.502344 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:48:54.502358 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:48:54.502447 | orchestrator | 2025-06-22 19:48:54.502460 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-22 19:48:54.502471 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:09.078) 0:00:24.750 *********** 2025-06-22 19:48:54.502520 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:48:54.502533 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:48:54.502544 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:48:54.502556 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:48:54.502567 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:48:54.502578 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:48:54.502589 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:54.502600 | orchestrator | 2025-06-22 19:48:54.502612 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-22 19:48:54.502623 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:16.312) 0:00:41.062 *********** 2025-06-22 19:48:54.502636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:48:54.502650 | orchestrator | 2025-06-22 19:48:54.502661 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-22 19:48:54.502672 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:01.655) 0:00:42.718 *********** 2025-06-22 19:48:54.502683 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-22 19:48:54.502694 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-22 19:48:54.502706 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-22 19:48:54.502717 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-22 19:48:54.502746 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-22 19:48:54.502758 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-22 19:48:54.502770 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-22 19:48:54.502781 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-22 19:48:54.502792 | orchestrator | changed: [testbed-node-1] => 
(item=stream.conf) 2025-06-22 19:48:54.502803 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-22 19:48:54.502814 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-22 19:48:54.502825 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-22 19:48:54.502835 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-22 19:48:54.502859 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-22 19:48:54.502871 | orchestrator | 2025-06-22 19:48:54.502882 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-22 19:48:54.502893 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:06.296) 0:00:49.014 *********** 2025-06-22 19:48:54.502904 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:54.502915 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:54.502927 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:54.502938 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:54.502949 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:54.502959 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.502970 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:54.502981 | orchestrator | 2025-06-22 19:48:54.502992 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-22 19:48:54.503003 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:01.204) 0:00:50.219 *********** 2025-06-22 19:48:54.503014 | orchestrator | changed: [testbed-manager] 2025-06-22 19:48:54.503025 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:48:54.503036 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:48:54.503105 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:48:54.503119 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:48:54.503162 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:48:54.503183 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:48:54.503202 | orchestrator | 2025-06-22 19:48:54.503222 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-22 19:48:54.503240 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:01.734) 0:00:51.953 *********** 2025-06-22 19:48:54.503256 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:54.503267 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:54.503277 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:54.503288 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:54.503299 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:54.503310 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.503321 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:54.503332 | orchestrator | 2025-06-22 19:48:54.503343 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-22 19:48:54.503354 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:01.521) 0:00:53.475 *********** 2025-06-22 19:48:54.503365 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:54.503375 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:54.503386 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:54.503397 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:54.503408 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:54.503418 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.503429 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:54.503440 | orchestrator | 
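The "Group hosts based on enabled services" task earlier in this output (the enable_netdata_True items) is the group_by pattern: every host is added to a dynamic group named after a boolean service flag, and the netdata play then targets that group. A rough Python analogue, with an invented three-host inventory:

from collections import defaultdict

# Invented inventory: host name -> service flags (mirrors enable_netdata: true).
inventory = {
    "testbed-manager": {"enable_netdata": True},
    "testbed-node-0": {"enable_netdata": True},
    "testbed-node-1": {"enable_netdata": False},
}

def group_by_flags(hosts: dict) -> dict:
    """Build dynamic groups such as 'enable_netdata_True' from per-host flags."""
    groups = defaultdict(list)
    for host, flags in hosts.items():
        for flag, value in flags.items():
            groups[f"{flag}_{value}"].append(host)
    return dict(groups)

for group, members in group_by_flags(inventory).items():
    print(group, "->", ", ".join(members))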
2025-06-22 19:48:54.503450 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-22 19:48:54.503461 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:02.077) 0:00:55.552 ***********
2025-06-22 19:48:54.503472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-22 19:48:54.503486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 19:48:54.503498 | orchestrator |
2025-06-22 19:48:54.503509 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-22 19:48:54.503526 | orchestrator | Sunday 22 June 2025 19:48:47 +0000 (0:00:01.038) 0:00:56.591 ***********
2025-06-22 19:48:54.503538 | orchestrator | changed: [testbed-manager]
2025-06-22 19:48:54.503549 | orchestrator |
2025-06-22 19:48:54.503559 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-22 19:48:54.503570 | orchestrator | Sunday 22 June 2025 19:48:49 +0000 (0:00:01.775) 0:00:58.366 ***********
2025-06-22 19:48:54.503590 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:48:54.503600 | orchestrator | changed: [testbed-manager]
2025-06-22 19:48:54.503611 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:48:54.503622 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:48:54.503632 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:48:54.503643 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:48:54.503654 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:48:54.503665 | orchestrator |
2025-06-22 19:48:54.503676 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:48:54.503687 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503699 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503710 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503721 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503743 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503754 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503765 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:48:54.503776 | orchestrator |
2025-06-22 19:48:54.503787 | orchestrator |
2025-06-22 19:48:54.503797 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:48:54.503811 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:03.398) 0:01:01.765 ***********
2025-06-22 19:48:54.503829 | orchestrator | ===============================================================================
2025-06-22 19:48:54.503846 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.31s
2025-06-22 19:48:54.503857 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.08s
2025-06-22 19:48:54.503868 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.30s
2025-06-22 19:48:54.503879 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.40s
2025-06-22 19:48:54.503889 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.32s
2025-06-22 19:48:54.503900 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.16s
2025-06-22 19:48:54.503911 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.14s
2025-06-22 19:48:54.503922 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.83s
2025-06-22 19:48:54.503933 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.50s
2025-06-22 19:48:54.503943 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.08s
2025-06-22 19:48:54.503954 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.78s
2025-06-22 19:48:54.503965 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.73s
2025-06-22 19:48:54.503975 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.66s
2025-06-22 19:48:54.503986 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.52s
2025-06-22 19:48:54.503996 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.20s
2025-06-22 19:48:54.504007 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.04s
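The recap above closes out the netdata play: server.yml ran only on testbed-manager, which is also the only host where the vm.max_map_count sysctl changed, while the nodes ran client.yml (presumably so they stream metrics to the manager via stream.conf). A minimal Ansible sketch of that host-type split could look like the following; the group name, the condition, and the sysctl value are assumptions for illustration, not the actual osism.services.netdata sources.

# Illustrative sketch only, not the actual role code.
- name: Include host type specific tasks
  ansible.builtin.include_tasks: "{{ 'server.yml' if inventory_hostname in groups['netdata_server'] | default([]) else 'client.yml' }}"  # assumed group name

# server.yml side (testbed-manager in this run): the streaming parent gets a
# larger mmap limit, presumably for netdata's database engine.
- name: Set sysctl vm.max_map_count parameter
  become: true
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"        # assumed value; only the parameter name appears in the log
    state: present
    sysctl_set: true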
2025-06-22 19:48:54 to 19:50:23 | orchestrator | [repetitive polling output condensed: tasks b549db4e-826f-4d91-8ced-61396a48bf3b, 17f02ac5-f512-4153-8d5c-8d54d97b4875 and 13631300-1834-4648-9e60-4b25a0b1bd35 were checked roughly every three seconds and remained in state STARTED, each round ending with "Wait 1 second(s) until the next check"]
2025-06-22 19:50:26.086070 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED
2025-06-22 19:50:26.086159 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED
2025-06-22 19:50:26.086174 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED
2025-06-22 19:50:26.086186 | orchestrator | 2025-06-22 19:50:26 |
INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:26.086197 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:26.086230 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:26.097570 | orchestrator | 2025-06-22 19:50:26.097656 | orchestrator | 2025-06-22 19:50:26.097671 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-06-22 19:50:26.097683 | orchestrator | 2025-06-22 19:50:26.097695 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-22 19:50:26.097745 | orchestrator | Sunday 22 June 2025 19:47:43 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-06-22 19:50:26.097760 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:50:26.097772 | orchestrator | 2025-06-22 19:50:26.097783 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-22 19:50:26.097794 | orchestrator | Sunday 22 June 2025 19:47:44 +0000 (0:00:01.229) 0:00:01.483 *********** 2025-06-22 19:50:26.097805 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097816 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097827 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097838 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097849 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.097860 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097870 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.097881 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097892 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.097903 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.097914 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:50:26.097925 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.097936 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.097947 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.097958 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.098266 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.098300 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:50:26.098313 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.098325 | 
orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.098338 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.098350 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:50:26.098364 | orchestrator | 2025-06-22 19:50:26.098377 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-22 19:50:26.098389 | orchestrator | Sunday 22 June 2025 19:47:49 +0000 (0:00:04.394) 0:00:05.878 *********** 2025-06-22 19:50:26.098403 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:50:26.098417 | orchestrator | 2025-06-22 19:50:26.098429 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-22 19:50:26.098442 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:01.613) 0:00:07.491 *********** 2025-06-22 19:50:26.098457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098505 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098654 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098815 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.098854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.098932 | orchestrator | 2025-06-22 19:50:26.098944 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-22 19:50:26.098955 | orchestrator | Sunday 22 June 2025 19:47:56 +0000 (0:00:05.951) 0:00:13.442 *********** 2025-06-22 19:50:26.098980 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.098996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099020 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:26.099040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099081 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:26.099093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099127 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:26.099143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099190 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:26.099258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099296 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:50:26.099307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099346 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:26.099363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099404 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:26.099415 | orchestrator | 2025-06-22 19:50:26.099426 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-22 19:50:26.099438 | orchestrator | Sunday 22 June 2025 19:47:58 +0000 (0:00:01.871) 0:00:15.313 *********** 2025-06-22 19:50:26.099449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099461 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099473 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099492 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:26.099504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099521 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:26.099561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099650 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:26.099670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099705 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:26.099716 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:50:26.099732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099744 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099771 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:26.099782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:50:26.099803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.099824 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:26.099833 | orchestrator | 2025-06-22 19:50:26.099843 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-22 19:50:26.099853 | orchestrator | Sunday 22 June 2025 19:48:01 +0000 (0:00:02.356) 0:00:17.670 *********** 2025-06-22 19:50:26.099862 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:26.099883 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:26.099893 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:26.099903 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:26.099913 | orchestrator | 
skipping: [testbed-node-3] 2025-06-22 19:50:26.099922 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:26.099932 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:26.099941 | orchestrator | 2025-06-22 19:50:26.099951 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-22 19:50:26.099961 | orchestrator | Sunday 22 June 2025 19:48:02 +0000 (0:00:01.342) 0:00:19.013 *********** 2025-06-22 19:50:26.099971 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:26.099980 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:26.099990 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:26.099999 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:26.100009 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:50:26.100019 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:26.100028 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:26.100038 | orchestrator | 2025-06-22 19:50:26.100048 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-22 19:50:26.100058 | orchestrator | Sunday 22 June 2025 19:48:03 +0000 (0:00:01.261) 0:00:20.274 *********** 2025-06-22 19:50:26.100067 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100108 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.100166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100197 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.100354 | orchestrator | 2025-06-22 19:50:26.100364 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-22 19:50:26.100374 | orchestrator | Sunday 22 June 2025 19:48:09 +0000 (0:00:05.630) 0:00:25.905 *********** 2025-06-22 19:50:26.100384 | orchestrator | [WARNING]: Skipped 2025-06-22 19:50:26.100394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-22 19:50:26.100404 | orchestrator | to this access issue: 2025-06-22 19:50:26.100414 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-22 19:50:26.100423 | orchestrator | directory 2025-06-22 19:50:26.100434 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:50:26.100444 | orchestrator | 2025-06-22 19:50:26.100453 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-22 19:50:26.100463 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:02.240) 0:00:28.145 *********** 2025-06-22 
19:50:26.100472 | orchestrator | [WARNING]: Skipped 2025-06-22 19:50:26.100482 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-22 19:50:26.100492 | orchestrator | to this access issue: 2025-06-22 19:50:26.100501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-22 19:50:26.100511 | orchestrator | directory 2025-06-22 19:50:26.100520 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:50:26.100536 | orchestrator | 2025-06-22 19:50:26.100546 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-22 19:50:26.100556 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:01.088) 0:00:29.233 *********** 2025-06-22 19:50:26.100565 | orchestrator | [WARNING]: Skipped 2025-06-22 19:50:26.100575 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-22 19:50:26.100584 | orchestrator | to this access issue: 2025-06-22 19:50:26.100594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-22 19:50:26.100603 | orchestrator | directory 2025-06-22 19:50:26.100613 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:50:26.100622 | orchestrator | 2025-06-22 19:50:26.100632 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-22 19:50:26.100642 | orchestrator | Sunday 22 June 2025 19:48:13 +0000 (0:00:00.867) 0:00:30.101 *********** 2025-06-22 19:50:26.100651 | orchestrator | [WARNING]: Skipped 2025-06-22 19:50:26.100661 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-22 19:50:26.100670 | orchestrator | to this access issue: 2025-06-22 19:50:26.100680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-22 19:50:26.100689 | orchestrator | directory 2025-06-22 19:50:26.100699 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:50:26.100708 | orchestrator | 2025-06-22 19:50:26.100718 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-22 19:50:26.100728 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:01.038) 0:00:31.139 *********** 2025-06-22 19:50:26.100737 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.100747 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.100757 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.100766 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.100775 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.100785 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.100794 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.100803 | orchestrator | 2025-06-22 19:50:26.100813 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-22 19:50:26.100823 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:05.713) 0:00:36.852 *********** 2025-06-22 19:50:26.100836 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100846 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100856 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100922 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100932 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100941 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:50:26.100951 | orchestrator | 2025-06-22 19:50:26.100961 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-22 19:50:26.100970 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:03.443) 0:00:40.296 *********** 2025-06-22 19:50:26.100980 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.100990 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.101000 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.101009 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.101025 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.101059 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.101077 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.101087 | orchestrator | 2025-06-22 19:50:26.101097 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-22 19:50:26.101107 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:03.229) 0:00:43.527 *********** 2025-06-22 19:50:26.101117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101138 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101148 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101167 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101200 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101270 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101323 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101353 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101380 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101398 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:50:26.101415 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101423 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101431 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101439 | orchestrator | 2025-06-22 19:50:26.101448 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-22 19:50:26.101456 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:02.515) 0:00:46.042 *********** 2025-06-22 19:50:26.101468 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101484 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101492 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101500 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101508 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101515 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:50:26.101523 | orchestrator | 2025-06-22 19:50:26.101536 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 13631300-1834-4648-9e60-4b25a0b1bd35 is in state SUCCESS 2025-06-22 19:50:26.101544 | orchestrator | 2025-06-22 19:50:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:26.101684 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-22 19:50:26.101695 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:02.782) 0:00:48.825 *********** 2025-06-22 19:50:26.101703 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101711 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101719 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101727 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101734 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101742 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101750 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:50:26.101757 | orchestrator | 2025-06-22 19:50:26.101765 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-22 19:50:26.101773 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:03.592) 0:00:52.418 *********** 2025-06-22 19:50:26.101781 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101843 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101869 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:50:26.101901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101912 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.101978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:50:26.102054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 
19:50:26.102074 | orchestrator | 2025-06-22 19:50:26.102088 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-22 19:50:26.102101 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:03.596) 0:00:56.014 *********** 2025-06-22 19:50:26.102121 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.102135 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.102147 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.102161 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.102174 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.102186 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.102194 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.102201 | orchestrator | 2025-06-22 19:50:26.102227 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-22 19:50:26.102235 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:01.687) 0:00:57.702 *********** 2025-06-22 19:50:26.102243 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.102251 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.102259 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.102266 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.102274 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.102282 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.102290 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.102298 | orchestrator | 2025-06-22 19:50:26.102306 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102315 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:01.270) 0:00:58.972 *********** 2025-06-22 19:50:26.102324 | orchestrator | 2025-06-22 19:50:26.102332 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102341 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.170) 0:00:59.142 *********** 2025-06-22 19:50:26.102350 | orchestrator | 2025-06-22 19:50:26.102359 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102375 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.052) 0:00:59.195 *********** 2025-06-22 19:50:26.102384 | orchestrator | 2025-06-22 19:50:26.102392 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102401 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.051) 0:00:59.246 *********** 2025-06-22 19:50:26.102410 | orchestrator | 2025-06-22 19:50:26.102419 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102428 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.051) 0:00:59.297 *********** 2025-06-22 19:50:26.102440 | orchestrator | 2025-06-22 19:50:26.102453 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102466 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.053) 0:00:59.351 *********** 2025-06-22 19:50:26.102480 | orchestrator | 2025-06-22 19:50:26.102491 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:50:26.102501 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.053) 0:00:59.404 *********** 
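
Note: every loop in the common-role tasks above ("Copying over config.json files for services", "Ensuring config directories have correct owner and permission", "Check common containers") iterates over the same three-service map. Reconstructed here purely as a readability aid from the (item=...) dumps in this log (kolla-ansible keeps the authoritative definition, typically named common_services, in the common role's defaults), it is approximately:

  # Reconstructed from the item dumps above; field order and any defaults
  # not shown in the log may differ from the upstream kolla-ansible source.
  common_services:
    fluentd:
      container_name: fluentd
      group: fluentd
      enabled: true
      image: registry.osism.tech/kolla/release/fluentd:5.0.7.20250530
      environment:
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
      volumes:
        - /etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - fluentd_data:/var/lib/fluentd/data/
        - /var/log/journal:/var/log/journal:ro
      dimensions: {}
    kolla-toolbox:
      container_name: kolla_toolbox
      group: kolla-toolbox
      enabled: true
      image: registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530
      environment:
        ANSIBLE_NOCOLOR: "1"
        ANSIBLE_LIBRARY: /usr/share/ansible
        REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
      privileged: true
      volumes:
        - /etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - /dev/:/dev/
        - /run/:/run/:shared
        - kolla_logs:/var/log/kolla/
      dimensions: {}
    cron:
      container_name: cron
      group: cron
      enabled: true
      image: registry.osism.tech/kolla/release/cron:3.0.20250530
      environment:
        KOLLA_LOGROTATE_SCHEDULE: daily
      volumes:
        - /etc/kolla/cron/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      dimensions: {}

The restart handlers that run next act on exactly these three containers, which is why only fluentd, kolla_toolbox and cron appear in the handler section below.
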
2025-06-22 19:50:26.102510 | orchestrator | 2025-06-22 19:50:26.102519 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-22 19:50:26.102528 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.078) 0:00:59.482 *********** 2025-06-22 19:50:26.102537 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.102546 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.102555 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.102564 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.102572 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.102582 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.102590 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.102599 | orchestrator | 2025-06-22 19:50:26.102609 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-22 19:50:26.102618 | orchestrator | Sunday 22 June 2025 19:49:29 +0000 (0:00:46.454) 0:01:45.937 *********** 2025-06-22 19:50:26.102627 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.102636 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.102645 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.102653 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.102661 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.102669 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.102676 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.102684 | orchestrator | 2025-06-22 19:50:26.102692 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-22 19:50:26.102704 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:47.440) 0:02:33.378 *********** 2025-06-22 19:50:26.102713 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:50:26.102721 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:50:26.102728 | orchestrator | ok: [testbed-manager] 2025-06-22 19:50:26.102736 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:50:26.102744 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:50:26.102752 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:26.102759 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:26.102767 | orchestrator | 2025-06-22 19:50:26.102775 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-22 19:50:26.102783 | orchestrator | Sunday 22 June 2025 19:50:18 +0000 (0:00:02.027) 0:02:35.406 *********** 2025-06-22 19:50:26.102790 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:26.102798 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:26.102806 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:26.102814 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:26.102821 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:26.102829 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:26.102837 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:26.102845 | orchestrator | 2025-06-22 19:50:26.102852 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:26.102867 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102875 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102883 | orchestrator | 
testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102896 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102905 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102913 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102921 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:50:26.102928 | orchestrator | 2025-06-22 19:50:26.102936 | orchestrator | 2025-06-22 19:50:26.102944 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:26.102952 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:04.310) 0:02:39.717 *********** 2025-06-22 19:50:26.102960 | orchestrator | =============================================================================== 2025-06-22 19:50:26.102967 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 47.44s 2025-06-22 19:50:26.102975 | orchestrator | common : Restart fluentd container ------------------------------------- 46.45s 2025-06-22 19:50:26.102983 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.95s 2025-06-22 19:50:26.102990 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.71s 2025-06-22 19:50:26.102998 | orchestrator | common : Copying over config.json files for services -------------------- 5.63s 2025-06-22 19:50:26.103006 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.39s 2025-06-22 19:50:26.103014 | orchestrator | common : Restart cron container ----------------------------------------- 4.31s 2025-06-22 19:50:26.103021 | orchestrator | common : Check common containers ---------------------------------------- 3.60s 2025-06-22 19:50:26.103029 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.59s 2025-06-22 19:50:26.103037 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.44s 2025-06-22 19:50:26.103045 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.23s 2025-06-22 19:50:26.103052 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.78s 2025-06-22 19:50:26.103060 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.52s 2025-06-22 19:50:26.103068 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.36s 2025-06-22 19:50:26.103076 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.24s 2025-06-22 19:50:26.103083 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.03s 2025-06-22 19:50:26.103091 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.87s 2025-06-22 19:50:26.103098 | orchestrator | common : Creating log volume -------------------------------------------- 1.69s 2025-06-22 19:50:26.103106 | orchestrator | common : include_tasks -------------------------------------------------- 1.61s 2025-06-22 19:50:26.103114 | orchestrator | common : Copying over /run subdirectories conf 
-------------------------- 1.34s 2025-06-22 19:50:29.123774 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED 2025-06-22 19:50:29.123861 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:29.128985 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:29.129470 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:29.130785 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:29.134702 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:29.134736 | orchestrator | 2025-06-22 19:50:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:32.180974 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED 2025-06-22 19:50:32.181362 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:32.182082 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:32.183299 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:32.183869 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:32.185875 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:32.185938 | orchestrator | 2025-06-22 19:50:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:35.240373 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED 2025-06-22 19:50:35.240462 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:35.240478 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:35.240490 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:35.240567 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:35.240944 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:35.240964 | orchestrator | 2025-06-22 19:50:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:38.279332 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED 2025-06-22 19:50:38.282295 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:38.282681 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:38.283465 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:38.287174 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:38.288506 | orchestrator | 2025-06-22 
19:50:38 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:38.288667 | orchestrator | 2025-06-22 19:50:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:41.317862 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state STARTED 2025-06-22 19:50:41.318114 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:41.318521 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:41.319127 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:41.320104 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:41.320384 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:41.320411 | orchestrator | 2025-06-22 19:50:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:44.347639 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task cee98f1a-8eb2-4bea-b0de-5e500bf2c0a2 is in state SUCCESS 2025-06-22 19:50:44.348528 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:44.349760 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:44.350363 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:44.350991 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:44.353650 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:44.354499 | orchestrator | 2025-06-22 19:50:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:47.386608 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:47.388563 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:47.388630 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:47.389118 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:47.389899 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:50:47.390211 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:47.390347 | orchestrator | 2025-06-22 19:50:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:50.427267 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state STARTED 2025-06-22 19:50:50.427670 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:50.428617 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:50.429583 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:50.429913 | 
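The blocks of "Task ... is in state STARTED" followed by "Wait 1 second(s) until the next check" above (and again further below) reflect a simple poll-until-done pattern: the state of every submitted deployment task is queried roughly once per second, and the watcher only moves on once a task reports SUCCESS or FAILURE. A minimal sketch of such a loop, assuming a hypothetical get_task_state helper rather than the actual osism client API:

    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("task-watcher")

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll get_task_state(task_id) until no task is pending anymore.

        get_task_state is assumed to return a state string such as
        "STARTED", "SUCCESS" or "FAILURE" for the given task ID.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                log.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                log.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)
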
orchestrator | 2025-06-22 19:50:50 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:50:50.430460 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:50.430503 | orchestrator | 2025-06-22 19:50:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:53.490064 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task cbddede1-51f2-4842-a4a4-2a1c216a27d5 is in state SUCCESS 2025-06-22 19:50:53.491446 | orchestrator | 2025-06-22 19:50:53.491494 | orchestrator | 2025-06-22 19:50:53.491508 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:50:53.491544 | orchestrator | 2025-06-22 19:50:53.491558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:50:53.491569 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.317) 0:00:00.317 *********** 2025-06-22 19:50:53.491581 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:50:53.491593 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:50:53.491604 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:50:53.491614 | orchestrator | 2025-06-22 19:50:53.491625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:50:53.491637 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.359) 0:00:00.676 *********** 2025-06-22 19:50:53.491649 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-22 19:50:53.491660 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-22 19:50:53.491707 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-22 19:50:53.491720 | orchestrator | 2025-06-22 19:50:53.491732 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-22 19:50:53.491744 | orchestrator | 2025-06-22 19:50:53.491755 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-22 19:50:53.491767 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.618) 0:00:01.295 *********** 2025-06-22 19:50:53.491844 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:50:53.491860 | orchestrator | 2025-06-22 19:50:53.491872 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-22 19:50:53.491884 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.570) 0:00:01.865 *********** 2025-06-22 19:50:53.491896 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-22 19:50:53.491908 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:50:53.491919 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-22 19:50:53.491930 | orchestrator | 2025-06-22 19:50:53.491941 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-22 19:50:53.491952 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.935) 0:00:02.800 *********** 2025-06-22 19:50:53.491964 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-22 19:50:53.491975 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:50:53.491987 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-22 19:50:53.491999 | orchestrator | 2025-06-22 19:50:53.492012 | 
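The "Copying over config.json files for services" step above places a small JSON descriptor into /etc/kolla/<service>/ on each node; that directory is bind-mounted into the container as /var/lib/kolla/config_files/ (visible in the volume lists further below), and kolla's container start script reads the descriptor to copy the listed files into place and to learn which command to exec. A rough sketch of that shape, with placeholder paths and command rather than the exact file rendered for memcached:

    import json

    # Illustrative only: the general layout of a kolla config.json --
    # one command to run plus a list of files to copy before it starts.
    config = {
        "command": "/usr/bin/memcached -l 0.0.0.0 -p 11211",  # placeholder command line
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/memcached.conf",  # hypothetical file
                "dest": "/etc/memcached.conf",
                "owner": "memcached",
                "perm": "0600",
            },
        ],
    }

    print(json.dumps(config, indent=4))
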
orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-22 19:50:53.492023 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:02.398) 0:00:05.199 *********** 2025-06-22 19:50:53.492047 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:53.492122 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:53.492137 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:53.492149 | orchestrator | 2025-06-22 19:50:53.492160 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-22 19:50:53.492171 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:01.735) 0:00:06.935 *********** 2025-06-22 19:50:53.492182 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:53.492193 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:53.492204 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:53.492215 | orchestrator | 2025-06-22 19:50:53.492254 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:53.492265 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.492278 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.492289 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.492313 | orchestrator | 2025-06-22 19:50:53.492325 | orchestrator | 2025-06-22 19:50:53.492336 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:53.492347 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:07.623) 0:00:14.559 *********** 2025-06-22 19:50:53.492358 | orchestrator | =============================================================================== 2025-06-22 19:50:53.492368 | orchestrator | memcached : Restart memcached container --------------------------------- 7.62s 2025-06-22 19:50:53.492380 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.40s 2025-06-22 19:50:53.492392 | orchestrator | memcached : Check memcached container ----------------------------------- 1.74s 2025-06-22 19:50:53.492403 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.94s 2025-06-22 19:50:53.492415 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-06-22 19:50:53.492426 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.57s 2025-06-22 19:50:53.492436 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-06-22 19:50:53.492446 | orchestrator | 2025-06-22 19:50:53.492624 | orchestrator | 2025-06-22 19:50:53.492644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:50:53.492655 | orchestrator | 2025-06-22 19:50:53.492667 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:50:53.492677 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.390) 0:00:00.390 *********** 2025-06-22 19:50:53.492688 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:50:53.492700 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:50:53.492710 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:50:53.492721 | orchestrator | 2025-06-22 
19:50:53.492732 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:50:53.492742 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.422) 0:00:00.812 *********** 2025-06-22 19:50:53.492753 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-22 19:50:53.492763 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-22 19:50:53.492774 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-22 19:50:53.492784 | orchestrator | 2025-06-22 19:50:53.492795 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-22 19:50:53.492807 | orchestrator | 2025-06-22 19:50:53.492818 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-22 19:50:53.492829 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.408) 0:00:01.220 *********** 2025-06-22 19:50:53.492840 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:50:53.492851 | orchestrator | 2025-06-22 19:50:53.492861 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-22 19:50:53.492871 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.508) 0:00:01.729 *********** 2025-06-22 19:50:53.492885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.492991 | orchestrator | 2025-06-22 19:50:53.493003 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-22 19:50:53.493013 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:01.501) 0:00:03.230 *********** 2025-06-22 19:50:53.493025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493115 | orchestrator | 2025-06-22 19:50:53.493126 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-22 19:50:53.493136 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:02.881) 0:00:06.112 *********** 2025-06-22 19:50:53.493147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 
19:50:53.493158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493265 | orchestrator | 2025-06-22 19:50:53.493277 | orchestrator | TASK [redis : Check redis containers] 
****************************************** 2025-06-22 19:50:53.493291 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:02.754) 0:00:08.867 *********** 2025-06-22 19:50:53.493307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:50:53.493417 | orchestrator | 2025-06-22 19:50:53.493428 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:50:53.493438 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:01.784) 0:00:10.652 *********** 2025-06-22 19:50:53.493448 | orchestrator | 2025-06-22 19:50:53.493459 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:50:53.493470 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:00.232) 0:00:10.884 *********** 2025-06-22 19:50:53.493480 | orchestrator | 2025-06-22 19:50:53.493490 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:50:53.493508 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:00.268) 0:00:11.153 *********** 2025-06-22 19:50:53.493519 | orchestrator | 2025-06-22 19:50:53.493530 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-22 19:50:53.493541 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:00.152) 0:00:11.306 *********** 2025-06-22 19:50:53.493551 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:53.493563 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:53.493573 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:53.493584 | orchestrator | 2025-06-22 19:50:53.493594 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-22 19:50:53.493604 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:08.549) 0:00:19.855 *********** 2025-06-22 19:50:53.493614 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:53.493623 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:53.493633 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:53.493643 | orchestrator | 2025-06-22 19:50:53.493654 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:53.493663 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.493674 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.493684 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:53.493695 | orchestrator | 2025-06-22 19:50:53.493705 | orchestrator | 2025-06-22 19:50:53.493716 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:53.493725 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:03.733) 0:00:23.588 *********** 2025-06-22 19:50:53.493741 | orchestrator | =============================================================================== 2025-06-22 19:50:53.493751 | orchestrator | redis : Restart 
redis container ----------------------------------------- 8.55s 2025-06-22 19:50:53.493762 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.73s 2025-06-22 19:50:53.493773 | orchestrator | redis : Copying over default config.json files -------------------------- 2.88s 2025-06-22 19:50:53.493783 | orchestrator | redis : Copying over redis config files --------------------------------- 2.75s 2025-06-22 19:50:53.493793 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s 2025-06-22 19:50:53.493802 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.50s 2025-06-22 19:50:53.493812 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.65s 2025-06-22 19:50:53.493823 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s 2025-06-22 19:50:53.493833 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-06-22 19:50:53.493843 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-06-22 19:50:53.496311 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:53.498169 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:53.500053 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:53.503447 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:50:53.505359 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:53.505399 | orchestrator | 2025-06-22 19:50:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:56.537834 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:56.538495 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:56.538841 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:56.543428 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:50:56.543464 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:56.543484 | orchestrator | 2025-06-22 19:50:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:59.571366 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:50:59.571616 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:50:59.572460 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:50:59.573064 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:50:59.573680 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:50:59.573703 | orchestrator | 2025-06-22 19:50:59 | INFO  | Wait 1 second(s) until the next 
check 2025-06-22 19:51:02.615575 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:02.615999 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:02.616987 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:02.617744 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:02.619012 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:02.619045 | orchestrator | 2025-06-22 19:51:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:05.664408 | orchestrator | 2025-06-22 19:51:05 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:05.664620 | orchestrator | 2025-06-22 19:51:05 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:05.665101 | orchestrator | 2025-06-22 19:51:05 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:05.667321 | orchestrator | 2025-06-22 19:51:05 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:05.667874 | orchestrator | 2025-06-22 19:51:05 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:05.667900 | orchestrator | 2025-06-22 19:51:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:08.698354 | orchestrator | 2025-06-22 19:51:08 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:08.698413 | orchestrator | 2025-06-22 19:51:08 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:08.698422 | orchestrator | 2025-06-22 19:51:08 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:08.698777 | orchestrator | 2025-06-22 19:51:08 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:08.699581 | orchestrator | 2025-06-22 19:51:08 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:08.699648 | orchestrator | 2025-06-22 19:51:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:11.744137 | orchestrator | 2025-06-22 19:51:11 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:11.746156 | orchestrator | 2025-06-22 19:51:11 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:11.749013 | orchestrator | 2025-06-22 19:51:11 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:11.752009 | orchestrator | 2025-06-22 19:51:11 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:11.754503 | orchestrator | 2025-06-22 19:51:11 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:11.754544 | orchestrator | 2025-06-22 19:51:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:14.787894 | orchestrator | 2025-06-22 19:51:14 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:14.788112 | orchestrator | 2025-06-22 19:51:14 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:14.788842 | orchestrator | 2025-06-22 19:51:14 | INFO  | Task 
a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:14.790932 | orchestrator | 2025-06-22 19:51:14 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:14.792398 | orchestrator | 2025-06-22 19:51:14 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:14.792487 | orchestrator | 2025-06-22 19:51:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:17.827823 | orchestrator | 2025-06-22 19:51:17 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:17.827916 | orchestrator | 2025-06-22 19:51:17 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:17.828169 | orchestrator | 2025-06-22 19:51:17 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:17.828874 | orchestrator | 2025-06-22 19:51:17 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:17.829659 | orchestrator | 2025-06-22 19:51:17 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:17.829693 | orchestrator | 2025-06-22 19:51:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:20.861896 | orchestrator | 2025-06-22 19:51:20 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:20.862869 | orchestrator | 2025-06-22 19:51:20 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:20.864447 | orchestrator | 2025-06-22 19:51:20 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:20.865355 | orchestrator | 2025-06-22 19:51:20 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:20.867628 | orchestrator | 2025-06-22 19:51:20 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:20.868205 | orchestrator | 2025-06-22 19:51:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:23.904042 | orchestrator | 2025-06-22 19:51:23 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:23.904187 | orchestrator | 2025-06-22 19:51:23 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:23.904867 | orchestrator | 2025-06-22 19:51:23 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:23.905771 | orchestrator | 2025-06-22 19:51:23 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:23.911422 | orchestrator | 2025-06-22 19:51:23 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:23.911446 | orchestrator | 2025-06-22 19:51:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:26.951934 | orchestrator | 2025-06-22 19:51:26 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:26.953278 | orchestrator | 2025-06-22 19:51:26 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:26.955678 | orchestrator | 2025-06-22 19:51:26 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:26.958097 | orchestrator | 2025-06-22 19:51:26 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:26.959170 | orchestrator | 2025-06-22 19:51:26 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:26.959328 | orchestrator | 2025-06-22 
19:51:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:29.994880 | orchestrator | 2025-06-22 19:51:29 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:29.995934 | orchestrator | 2025-06-22 19:51:29 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:29.996696 | orchestrator | 2025-06-22 19:51:29 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state STARTED 2025-06-22 19:51:29.997449 | orchestrator | 2025-06-22 19:51:29 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:29.998065 | orchestrator | 2025-06-22 19:51:29 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:29.998161 | orchestrator | 2025-06-22 19:51:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:33.045925 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:33.046304 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:33.050193 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task a0ef1382-be38-41a7-b033-277f3c23d5bd is in state SUCCESS 2025-06-22 19:51:33.051785 | orchestrator | 2025-06-22 19:51:33.051863 | orchestrator | 2025-06-22 19:51:33.051879 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:51:33.051892 | orchestrator | 2025-06-22 19:51:33.051903 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:51:33.051915 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.301) 0:00:00.301 *********** 2025-06-22 19:51:33.051927 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:33.051940 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:33.051952 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:33.051964 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:33.051976 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:33.051988 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:33.052000 | orchestrator | 2025-06-22 19:51:33.052013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:51:33.052033 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.814) 0:00:01.116 *********** 2025-06-22 19:51:33.052048 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052060 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052071 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052082 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052112 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052123 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:51:33.052134 | orchestrator | 2025-06-22 19:51:33.052145 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-22 19:51:33.052158 | orchestrator | 2025-06-22 19:51:33.052169 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-22 19:51:33.052180 | orchestrator | Sunday 22 June 
2025 19:50:29 +0000 (0:00:00.909) 0:00:02.025 *********** 2025-06-22 19:51:33.052192 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:51:33.052204 | orchestrator | 2025-06-22 19:51:33.052215 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:51:33.052226 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:01.786) 0:00:03.812 *********** 2025-06-22 19:51:33.052237 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:51:33.052278 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:51:33.052298 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:51:33.052317 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 19:51:33.052332 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:51:33.052350 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:51:33.052361 | orchestrator | 2025-06-22 19:51:33.052372 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:51:33.052386 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:02.039) 0:00:05.852 *********** 2025-06-22 19:51:33.052398 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:51:33.052411 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:51:33.052424 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 19:51:33.052437 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:51:33.052450 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:51:33.052462 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:51:33.052475 | orchestrator | 2025-06-22 19:51:33.052488 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:51:33.052499 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:01.904) 0:00:07.757 *********** 2025-06-22 19:51:33.052543 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-22 19:51:33.052554 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:51:33.052565 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-22 19:51:33.052576 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:51:33.052587 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-22 19:51:33.052598 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:51:33.052609 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-22 19:51:33.052620 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-22 19:51:33.052631 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:51:33.052641 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:51:33.052652 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-22 19:51:33.052663 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:51:33.052673 | orchestrator | 2025-06-22 19:51:33.052684 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-22 19:51:33.052695 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:02.024) 0:00:09.781 *********** 2025-06-22 19:51:33.052706 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 19:51:33.052717 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:51:33.052728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:51:33.052747 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:51:33.052758 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:51:33.052768 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:51:33.052779 | orchestrator | 2025-06-22 19:51:33.052790 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-22 19:51:33.052801 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:01.036) 0:00:10.818 *********** 2025-06-22 19:51:33.052828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.052996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053009 | orchestrator | 2025-06-22 19:51:33.053020 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-22 19:51:33.053031 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:02.480) 0:00:13.299 *********** 2025-06-22 19:51:33.053043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053213 | orchestrator | 2025-06-22 19:51:33.053224 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-22 19:51:33.053235 | orchestrator | Sunday 22 June 2025 19:50:44 +0000 (0:00:02.959) 0:00:16.259 *********** 2025-06-22 19:51:33.053284 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:51:33.053296 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:51:33.053307 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:51:33.053318 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:51:33.053328 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:51:33.053339 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:51:33.053350 | orchestrator | 2025-06-22 19:51:33.053361 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-22 19:51:33.053372 | orchestrator | Sunday 22 June 2025 19:50:45 +0000 (0:00:01.060) 0:00:17.319 *********** 2025-06-22 19:51:33.053384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:51:33.053564 | orchestrator | 2025-06-22 19:51:33.053575 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053586 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:02.437) 0:00:19.757 *********** 2025-06-22 19:51:33.053602 | orchestrator | 2025-06-22 19:51:33.053614 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053629 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.146) 0:00:19.903 *********** 2025-06-22 19:51:33.053640 | orchestrator | 2025-06-22 19:51:33.053651 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053662 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.139) 0:00:20.043 *********** 2025-06-22 19:51:33.053673 | orchestrator | 2025-06-22 19:51:33.053684 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053695 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.133) 0:00:20.176 *********** 2025-06-22 19:51:33.053705 | orchestrator | 2025-06-22 19:51:33.053716 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053727 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:00.222) 0:00:20.399 *********** 2025-06-22 19:51:33.053738 | orchestrator | 2025-06-22 19:51:33.053749 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:51:33.053760 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:00.350) 0:00:20.750 *********** 2025-06-22 19:51:33.053771 | orchestrator | 2025-06-22 19:51:33.053781 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-22 19:51:33.053792 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:00.436) 0:00:21.187 *********** 2025-06-22 19:51:33.053803 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:33.053814 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:33.053825 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:33.053836 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:33.053846 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:33.053857 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:33.053868 | orchestrator | 2025-06-22 19:51:33.053879 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-22 19:51:33.053890 | orchestrator | Sunday 22 June 2025 19:50:59 +0000 (0:00:11.005) 0:00:32.192 *********** 2025-06-22 19:51:33.053901 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:33.053912 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:33.053923 | orchestrator | ok: [testbed-node-0] 2025-06-22 
19:51:33.053941 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:33.053960 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:33.053973 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:33.053984 | orchestrator | 2025-06-22 19:51:33.053994 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 19:51:33.054005 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:01.258) 0:00:33.450 *********** 2025-06-22 19:51:33.054114 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:33.054131 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:33.054142 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:33.054153 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:33.054164 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:33.054175 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:33.054186 | orchestrator | 2025-06-22 19:51:33.054197 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-22 19:51:33.054208 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:08.597) 0:00:42.048 *********** 2025-06-22 19:51:33.054227 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-22 19:51:33.054238 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-22 19:51:33.054307 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-22 19:51:33.054327 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-22 19:51:33.054339 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-22 19:51:33.054360 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-22 19:51:33.054371 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-22 19:51:33.054382 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-22 19:51:33.054393 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-22 19:51:33.054445 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-22 19:51:33.054459 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-22 19:51:33.054470 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-22 19:51:33.054481 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054492 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054503 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054513 | orchestrator | ok: [testbed-node-0] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054524 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054541 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:51:33.054552 | orchestrator | 2025-06-22 19:51:33.054563 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-22 19:51:33.054574 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:07.937) 0:00:49.985 *********** 2025-06-22 19:51:33.054586 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-22 19:51:33.054597 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:51:33.054608 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-22 19:51:33.054619 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:51:33.054630 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-22 19:51:33.054640 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:51:33.054651 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-22 19:51:33.054662 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-22 19:51:33.054673 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-22 19:51:33.054683 | orchestrator | 2025-06-22 19:51:33.054694 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-22 19:51:33.054705 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:02.177) 0:00:52.163 *********** 2025-06-22 19:51:33.054716 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:51:33.054727 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:51:33.054738 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:51:33.054749 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:51:33.054760 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:51:33.054771 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:51:33.054781 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:51:33.054792 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:51:33.054803 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:51:33.054813 | orchestrator | 2025-06-22 19:51:33.054823 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 19:51:33.054839 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:03.437) 0:00:55.600 *********** 2025-06-22 19:51:33.054849 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:33.054859 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:33.054869 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:33.054878 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:33.054888 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:33.054897 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:33.054907 | orchestrator | 2025-06-22 19:51:33.054917 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:51:33.054927 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:51:33.054944 | 
orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:51:33.054955 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:51:33.054965 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:51:33.054975 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:51:33.054985 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:51:33.054994 | orchestrator | 2025-06-22 19:51:33.055004 | orchestrator | 2025-06-22 19:51:33.055014 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:51:33.055024 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:07.856) 0:01:03.457 *********** 2025-06-22 19:51:33.055034 | orchestrator | =============================================================================== 2025-06-22 19:51:33.055043 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.45s 2025-06-22 19:51:33.055053 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.01s 2025-06-22 19:51:33.055063 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.94s 2025-06-22 19:51:33.055073 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.44s 2025-06-22 19:51:33.055082 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.96s 2025-06-22 19:51:33.055092 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.48s 2025-06-22 19:51:33.055101 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.44s 2025-06-22 19:51:33.055111 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.18s 2025-06-22 19:51:33.055121 | orchestrator | module-load : Load modules ---------------------------------------------- 2.04s 2025-06-22 19:51:33.055131 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.02s 2025-06-22 19:51:33.055141 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.90s 2025-06-22 19:51:33.055150 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.79s 2025-06-22 19:51:33.055160 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.43s 2025-06-22 19:51:33.055173 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.26s 2025-06-22 19:51:33.055184 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.06s 2025-06-22 19:51:33.055193 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.04s 2025-06-22 19:51:33.055203 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s 2025-06-22 19:51:33.055218 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2025-06-22 19:51:33.055228 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:33.055238 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 
35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:33.055270 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:33.055355 | orchestrator | 2025-06-22 19:51:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:36.086618 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:36.087825 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:36.089422 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:36.090871 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:36.091489 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:36.091503 | orchestrator | 2025-06-22 19:51:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:39.124795 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:39.130284 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:39.133049 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:39.134908 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:39.137175 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:39.137205 | orchestrator | 2025-06-22 19:51:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:42.168176 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:42.171701 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:42.171746 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:42.171759 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:42.171771 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:42.171782 | orchestrator | 2025-06-22 19:51:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:45.213906 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:45.214479 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:45.216242 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:45.216400 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:45.219152 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:45.219197 | orchestrator | 2025-06-22 19:51:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:48.251142 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 
b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:48.251933 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:48.252738 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:48.253464 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:48.254202 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:48.254388 | orchestrator | 2025-06-22 19:51:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:51.278884 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:51.278974 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:51.278990 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:51.279436 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:51.280090 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:51.280111 | orchestrator | 2025-06-22 19:51:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:54.323592 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:54.324051 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:54.325647 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:54.326240 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:54.327881 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:54.327905 | orchestrator | 2025-06-22 19:51:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:57.363116 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:51:57.366395 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:51:57.366853 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:51:57.369140 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:51:57.369803 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:51:57.369834 | orchestrator | 2025-06-22 19:51:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:00.402111 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:00.402328 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:00.403021 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:00.403821 | orchestrator | 2025-06-22 
19:52:00 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:00.404427 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:52:00.404455 | orchestrator | 2025-06-22 19:52:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:03.446440 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:03.446712 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:03.447203 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:03.449464 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:03.452672 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:52:03.454498 | orchestrator | 2025-06-22 19:52:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:06.491069 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:06.494376 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:06.497546 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:06.500585 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:06.503895 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state STARTED 2025-06-22 19:52:06.504155 | orchestrator | 2025-06-22 19:52:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:09.537467 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:09.541868 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:09.541897 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task a7369293-769e-4f9d-8ef0-81134169e69a is in state STARTED 2025-06-22 19:52:09.542615 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 989b4cc9-2c85-4de7-b73a-0a5156d318ec is in state STARTED 2025-06-22 19:52:09.543760 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:09.544060 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:09.547567 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 17f02ac5-f512-4153-8d5c-8d54d97b4875 is in state SUCCESS 2025-06-22 19:52:09.547749 | orchestrator | 2025-06-22 19:52:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:09.549605 | orchestrator | 2025-06-22 19:52:09.549647 | orchestrator | 2025-06-22 19:52:09.549659 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-22 19:52:09.549671 | orchestrator | 2025-06-22 19:52:09.549682 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-22 19:52:09.549697 | orchestrator | Sunday 22 June 2025 19:47:43 +0000 (0:00:00.210) 0:00:00.210 *********** 2025-06-22 19:52:09.549716 | orchestrator | ok: [testbed-node-3] 
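
A note on the Open vSwitch play that finishes above: the "Set system-id, hostname and hw-offload" task writes per-host keys into the external_ids column of the Open_vSwitch table, and the bridge/port tasks ensure br-ex exists with a vxlan0 port on the three network nodes. A minimal sketch of equivalent tasks, assuming the openvswitch.openvswitch collection rather than the exact kolla-ansible role code:

    # Sketch only; the kolla-ansible openvswitch role implements this differently.
    - name: Set system-id for this host in the Open_vSwitch table
      openvswitch.openvswitch.openvswitch_db:
        table: Open_vSwitch
        record: .
        col: external_ids
        key: system-id
        value: "{{ inventory_hostname }}"

    - name: Ensure the external bridge exists
      openvswitch.openvswitch.openvswitch_bridge:
        bridge: br-ex

    - name: Attach the overlay port to the external bridge
      openvswitch.openvswitch.openvswitch_port:
        bridge: br-ex
        port: vxlan0

The hw-offload item is reported as ok with 'state': 'absent', i.e. the key is ensured to be removed rather than set, so hardware offload stays disabled in this testbed.
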
2025-06-22 19:52:09.549735 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.549754 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.549771 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.549789 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.549806 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.549824 | orchestrator | 2025-06-22 19:52:09.549842 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-22 19:52:09.549884 | orchestrator | Sunday 22 June 2025 19:47:44 +0000 (0:00:00.690) 0:00:00.900 *********** 2025-06-22 19:52:09.549903 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.549922 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.549941 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.549959 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.549977 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.549996 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.550014 | orchestrator | 2025-06-22 19:52:09.550201 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-22 19:52:09.550223 | orchestrator | Sunday 22 June 2025 19:47:45 +0000 (0:00:00.716) 0:00:01.616 *********** 2025-06-22 19:52:09.550243 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.550262 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.550308 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.550328 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.550349 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.550369 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.550390 | orchestrator | 2025-06-22 19:52:09.550412 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-22 19:52:09.550432 | orchestrator | Sunday 22 June 2025 19:47:46 +0000 (0:00:00.988) 0:00:02.604 *********** 2025-06-22 19:52:09.550452 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.550473 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.550493 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.550568 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.550589 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.550608 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.550627 | orchestrator | 2025-06-22 19:52:09.550646 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-22 19:52:09.550666 | orchestrator | Sunday 22 June 2025 19:47:48 +0000 (0:00:01.997) 0:00:04.602 *********** 2025-06-22 19:52:09.550684 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.550704 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.550723 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.550775 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.550796 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.550814 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.550832 | orchestrator | 2025-06-22 19:52:09.550851 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-22 19:52:09.550870 | orchestrator | Sunday 22 June 2025 19:47:49 +0000 (0:00:01.075) 0:00:05.678 *********** 2025-06-22 19:52:09.550889 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.550907 | 
orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.550924 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.550943 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.550960 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.550978 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.550996 | orchestrator | 2025-06-22 19:52:09.551068 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-22 19:52:09.551091 | orchestrator | Sunday 22 June 2025 19:47:50 +0000 (0:00:00.944) 0:00:06.623 *********** 2025-06-22 19:52:09.551110 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.551129 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.551146 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.551164 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.551183 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.551202 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.551220 | orchestrator | 2025-06-22 19:52:09.551239 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-22 19:52:09.551305 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.728) 0:00:07.351 *********** 2025-06-22 19:52:09.551348 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.551368 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.551388 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.551407 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.551425 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.551444 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.551463 | orchestrator | 2025-06-22 19:52:09.551482 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-22 19:52:09.551500 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.643) 0:00:07.995 *********** 2025-06-22 19:52:09.551519 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551538 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551556 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.551575 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551594 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551612 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.551631 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551650 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551669 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.551687 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551773 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551792 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.551810 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551828 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551845 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 19:52:09.551864 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:52:09.551882 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:52:09.551901 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.551920 | orchestrator | 2025-06-22 19:52:09.551938 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-22 19:52:09.551957 | orchestrator | Sunday 22 June 2025 19:47:52 +0000 (0:00:01.156) 0:00:09.151 *********** 2025-06-22 19:52:09.551975 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.551993 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.552012 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.552030 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.552048 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.552066 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.552084 | orchestrator | 2025-06-22 19:52:09.552102 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-22 19:52:09.552120 | orchestrator | Sunday 22 June 2025 19:47:54 +0000 (0:00:01.817) 0:00:10.969 *********** 2025-06-22 19:52:09.552138 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:09.552156 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.552174 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.552192 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.552210 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.552228 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.552247 | orchestrator | 2025-06-22 19:52:09.552332 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-22 19:52:09.552403 | orchestrator | Sunday 22 June 2025 19:47:55 +0000 (0:00:00.972) 0:00:11.941 *********** 2025-06-22 19:52:09.552422 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.552439 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.552472 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.552490 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.552508 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.552526 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.552544 | orchestrator | 2025-06-22 19:52:09.552562 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-22 19:52:09.552580 | orchestrator | Sunday 22 June 2025 19:48:01 +0000 (0:00:06.310) 0:00:18.252 *********** 2025-06-22 19:52:09.552598 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.552616 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.552634 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.552686 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.552703 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.552719 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.552735 | orchestrator | 2025-06-22 19:52:09.552751 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-22 19:52:09.552768 | orchestrator | Sunday 22 June 2025 19:48:03 +0000 (0:00:01.055) 0:00:19.307 *********** 2025-06-22 19:52:09.552784 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.552800 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.552816 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.552832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.552848 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.552864 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.552880 | orchestrator | 2025-06-22 19:52:09.552897 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-22 19:52:09.552914 | orchestrator | Sunday 22 June 2025 19:48:05 +0000 (0:00:02.282) 0:00:21.589 *********** 2025-06-22 19:52:09.552930 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.552946 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.552961 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.552978 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.552994 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.553011 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.553028 | orchestrator | 2025-06-22 19:52:09.553054 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-22 19:52:09.553071 | orchestrator | Sunday 22 June 2025 19:48:06 +0000 (0:00:01.044) 0:00:22.634 *********** 2025-06-22 19:52:09.553088 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-22 19:52:09.553105 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-22 19:52:09.553121 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.553137 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-22 19:52:09.553153 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-22 19:52:09.553169 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.553185 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-22 19:52:09.553202 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-22 19:52:09.553218 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.553234 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-22 19:52:09.553250 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-22 19:52:09.553285 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.553302 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-22 19:52:09.553318 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-22 19:52:09.553335 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.553351 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-22 19:52:09.553368 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-22 19:52:09.553384 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.553400 | orchestrator | 2025-06-22 19:52:09.553416 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-22 19:52:09.553499 | orchestrator | Sunday 22 June 2025 19:48:07 +0000 (0:00:01.243) 0:00:23.878 *********** 2025-06-22 19:52:09.553518 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.553535 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.553551 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.553568 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.553585 | orchestrator | skipping: [testbed-node-1] 
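
The k3s_prereq tasks above that report changed are plain sysctl changes; a minimal sketch of what the forwarding tasks amount to, assuming the ansible.posix.sysctl module (the role's actual task names, keys and defaults may differ):

    # Sketch under the assumption that ansible.posix.sysctl is used.
    - name: Enable IPv4 forwarding
      ansible.posix.sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: true

    - name: Enable IPv6 forwarding
      ansible.posix.sysctl:
        name: net.ipv6.conf.all.forwarding
        value: "1"
        state: present
        reload: true

The arm64 and armhf download tasks are skipped, presumably because the nodes report an x86_64 architecture, so only the x64 k3s binary is fetched.
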
2025-06-22 19:52:09.553601 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.553618 | orchestrator | 2025-06-22 19:52:09.553634 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-22 19:52:09.553651 | orchestrator | 2025-06-22 19:52:09.553668 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-22 19:52:09.553685 | orchestrator | Sunday 22 June 2025 19:48:09 +0000 (0:00:01.845) 0:00:25.723 *********** 2025-06-22 19:52:09.553702 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.553718 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.553735 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.553752 | orchestrator | 2025-06-22 19:52:09.553768 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-22 19:52:09.553852 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:01.631) 0:00:27.355 *********** 2025-06-22 19:52:09.553872 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.553889 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.553906 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.553923 | orchestrator | 2025-06-22 19:52:09.553941 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-22 19:52:09.553959 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:01.760) 0:00:29.115 *********** 2025-06-22 19:52:09.553970 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.553980 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.553989 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.553999 | orchestrator | 2025-06-22 19:52:09.554014 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-22 19:52:09.554071 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:01.208) 0:00:30.324 *********** 2025-06-22 19:52:09.554089 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.554106 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.554122 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.554138 | orchestrator | 2025-06-22 19:52:09.554155 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-22 19:52:09.554172 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.817) 0:00:31.142 *********** 2025-06-22 19:52:09.554191 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.554208 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554226 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554243 | orchestrator | 2025-06-22 19:52:09.554261 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-22 19:52:09.554337 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.411) 0:00:31.554 *********** 2025-06-22 19:52:09.554349 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:52:09.554359 | orchestrator | 2025-06-22 19:52:09.554369 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-22 19:52:09.554379 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.883) 0:00:32.437 *********** 2025-06-22 19:52:09.554388 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.554398 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.554407 | 
orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.554417 | orchestrator | 2025-06-22 19:52:09.554430 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-22 19:52:09.554446 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:03.311) 0:00:35.749 *********** 2025-06-22 19:52:09.554462 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554479 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554495 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.554525 | orchestrator | 2025-06-22 19:52:09.554541 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-22 19:52:09.554558 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.874) 0:00:36.623 *********** 2025-06-22 19:52:09.554574 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554591 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554607 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.554623 | orchestrator | 2025-06-22 19:52:09.554638 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-22 19:52:09.554663 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:01.428) 0:00:38.052 *********** 2025-06-22 19:52:09.554680 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554697 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554714 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.554729 | orchestrator | 2025-06-22 19:52:09.554745 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-22 19:52:09.554761 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:02.086) 0:00:40.138 *********** 2025-06-22 19:52:09.554779 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.554794 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554811 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554828 | orchestrator | 2025-06-22 19:52:09.554845 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-22 19:52:09.554861 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.369) 0:00:40.508 *********** 2025-06-22 19:52:09.554878 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.554893 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.554906 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.554918 | orchestrator | 2025-06-22 19:52:09.554932 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-22 19:52:09.554945 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.565) 0:00:41.074 *********** 2025-06-22 19:52:09.554957 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.554970 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.554983 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.554996 | orchestrator | 2025-06-22 19:52:09.555010 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-22 19:52:09.555023 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:01.984) 0:00:43.058 *********** 2025-06-22 19:52:09.555048 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
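Earlier in this play the cluster is bootstrapped inside a short-lived k3s-init unit, which a later task ("Kill the temporary service used for initialization") tears down once the permanent k3s.service has been installed. A sketch of how such a transient unit can be launched, in the style of the upstream k3s-ansible project; the exact command, the token variable and the creates guard are assumptions, not this role's implementation:

    # Sketch only: run the first k3s server as a throwaway systemd-run unit.
    - name: Init cluster inside the transient k3s-init service
      ansible.builtin.command:
        cmd: >-
          systemd-run -p RestartSec=2 -p Restart=on-failure
          --unit=k3s-init
          k3s server --cluster-init --token {{ k3s_token }}
        creates: /var/lib/rancher/k3s/server/node-token   # bootstrap only once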
2025-06-22 19:52:09.555062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:52:09.555076 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:52:09.555089 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:52:09.555103 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:52:09.555117 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:52:09.555131 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:52:09.555144 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:52:09.555157 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:52:09.555182 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:52:09.555235 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:52:09.555249 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:52:09.555278 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-22 19:52:09.555293 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-22 19:52:09.555307 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
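The verification above polls until every expected node has joined, failing and retrying up to 20 times before all three masters report in (54.88s according to the tasks recap further down). A minimal sketch of that retries/until pattern, with an assumed kubectl invocation and interval rather than the role's exact task:

    # Sketch only: poll until all three master nodes are visible to the API server.
    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o name
      register: joined_nodes
      until: joined_nodes.stdout_lines | length >= 3   # testbed-node-0/1/2
      retries: 20       # matches the 20 attempts visible above
      delay: 10         # assumed interval between attempts
      changed_when: false

If the condition never becomes true, the task fails for good and the hint in its name applies: inspect k3s-init.service on the affected node.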
2025-06-22 19:52:09.555320 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.555334 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.555349 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.555362 | orchestrator | 2025-06-22 19:52:09.555375 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-22 19:52:09.555388 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:54.877) 0:01:37.936 *********** 2025-06-22 19:52:09.555402 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.555415 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.555429 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.555443 | orchestrator | 2025-06-22 19:52:09.555456 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-22 19:52:09.555470 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:00.313) 0:01:38.249 *********** 2025-06-22 19:52:09.555484 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.555497 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.555510 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.555523 | orchestrator | 2025-06-22 19:52:09.555536 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-22 19:52:09.555557 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.930) 0:01:39.180 *********** 2025-06-22 19:52:09.555571 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.555584 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.555597 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.555611 | orchestrator | 2025-06-22 19:52:09.555624 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-22 19:52:09.555638 | orchestrator | Sunday 22 June 2025 19:49:24 +0000 (0:00:01.180) 0:01:40.361 *********** 2025-06-22 19:52:09.555651 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.555664 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.555677 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.555690 | orchestrator | 2025-06-22 19:52:09.555703 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-22 19:52:09.555716 | orchestrator | Sunday 22 June 2025 19:49:38 +0000 (0:00:14.412) 0:01:54.774 *********** 2025-06-22 19:52:09.555729 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.555743 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.555756 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.555769 | orchestrator | 2025-06-22 19:52:09.555782 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-22 19:52:09.555796 | orchestrator | Sunday 22 June 2025 19:49:39 +0000 (0:00:00.706) 0:01:55.480 *********** 2025-06-22 19:52:09.555809 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.555822 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.555835 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.555848 | orchestrator | 2025-06-22 19:52:09.555861 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-22 19:52:09.555874 | orchestrator | Sunday 22 June 2025 19:49:39 +0000 (0:00:00.650) 0:01:56.131 *********** 2025-06-22 19:52:09.555896 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.555909 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 19:52:09.555922 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.555934 | orchestrator | 2025-06-22 19:52:09.555956 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-22 19:52:09.556005 | orchestrator | Sunday 22 June 2025 19:49:40 +0000 (0:00:00.654) 0:01:56.785 *********** 2025-06-22 19:52:09.556019 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.556032 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.556045 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.556058 | orchestrator | 2025-06-22 19:52:09.556071 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-22 19:52:09.556084 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:01.162) 0:01:57.948 *********** 2025-06-22 19:52:09.556097 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.556110 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.556123 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.556136 | orchestrator | 2025-06-22 19:52:09.556150 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-22 19:52:09.556163 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.287) 0:01:58.235 *********** 2025-06-22 19:52:09.556176 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.556189 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.556202 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.556215 | orchestrator | 2025-06-22 19:52:09.556228 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-22 19:52:09.556240 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.708) 0:01:58.944 *********** 2025-06-22 19:52:09.556253 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.556282 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.556296 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.556310 | orchestrator | 2025-06-22 19:52:09.556323 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-22 19:52:09.556337 | orchestrator | Sunday 22 June 2025 19:49:43 +0000 (0:00:00.620) 0:01:59.564 *********** 2025-06-22 19:52:09.556350 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.556363 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.556377 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.556390 | orchestrator | 2025-06-22 19:52:09.556403 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-22 19:52:09.556416 | orchestrator | Sunday 22 June 2025 19:49:44 +0000 (0:00:01.136) 0:02:00.701 *********** 2025-06-22 19:52:09.556430 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:09.556443 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:09.556456 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:09.556470 | orchestrator | 2025-06-22 19:52:09.556485 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-22 19:52:09.556499 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.926) 0:02:01.628 *********** 2025-06-22 19:52:09.556513 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.556527 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.556540 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
19:52:09.556553 | orchestrator | 2025-06-22 19:52:09.556567 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-22 19:52:09.556580 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.343) 0:02:01.971 *********** 2025-06-22 19:52:09.556595 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.556609 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.556622 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.556636 | orchestrator | 2025-06-22 19:52:09.556650 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-22 19:52:09.556663 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.509) 0:02:02.480 *********** 2025-06-22 19:52:09.556676 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.556699 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.556712 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.556726 | orchestrator | 2025-06-22 19:52:09.556739 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-22 19:52:09.556753 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:01.044) 0:02:03.525 *********** 2025-06-22 19:52:09.556766 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.556779 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.556792 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.556806 | orchestrator | 2025-06-22 19:52:09.556820 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-22 19:52:09.556835 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.698) 0:02:04.223 *********** 2025-06-22 19:52:09.556845 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:52:09.556853 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:52:09.556861 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:52:09.556869 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:52:09.556877 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:52:09.556885 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:52:09.556893 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:52:09.556901 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:52:09.556908 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:52:09.556916 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-22 19:52:09.556924 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:52:09.556932 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:52:09.556947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-22 19:52:09.556956 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:52:09.556964 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:52:09.556972 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:52:09.556980 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:52:09.556988 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:52:09.556996 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:52:09.557004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:52:09.557012 | orchestrator | 2025-06-22 19:52:09.557020 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-22 19:52:09.557028 | orchestrator | 2025-06-22 19:52:09.557036 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-22 19:52:09.557074 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:02.955) 0:02:07.178 *********** 2025-06-22 19:52:09.557082 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:09.557089 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.557097 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.557105 | orchestrator | 2025-06-22 19:52:09.557113 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-22 19:52:09.557126 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.836) 0:02:08.015 *********** 2025-06-22 19:52:09.557134 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:09.557142 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.557150 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.557157 | orchestrator | 2025-06-22 19:52:09.557166 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-22 19:52:09.557173 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.730) 0:02:08.746 *********** 2025-06-22 19:52:09.557181 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:09.557193 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.557206 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.557220 | orchestrator | 2025-06-22 19:52:09.557232 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-22 19:52:09.557243 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.331) 0:02:09.077 *********** 2025-06-22 19:52:09.557254 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:09.557362 | orchestrator | 2025-06-22 19:52:09.557386 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-22 19:52:09.557394 | orchestrator | Sunday 22 June 2025 19:49:53 +0000 (0:00:00.720) 0:02:09.798 *********** 2025-06-22 19:52:09.557402 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.557410 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.557417 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.557425 | orchestrator | 2025-06-22 19:52:09.557433 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-06-22 19:52:09.557441 | orchestrator | Sunday 22 June 2025 19:49:53 +0000 (0:00:00.360) 0:02:10.159 *********** 2025-06-22 19:52:09.557448 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.557456 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.557464 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.557471 | orchestrator | 2025-06-22 19:52:09.557479 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-22 19:52:09.557914 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.379) 0:02:10.539 *********** 2025-06-22 19:52:09.557925 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.557931 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.557938 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.557945 | orchestrator | 2025-06-22 19:52:09.557952 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-22 19:52:09.557959 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.354) 0:02:10.893 *********** 2025-06-22 19:52:09.557966 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.557972 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.557979 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.557985 | orchestrator | 2025-06-22 19:52:09.557992 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-22 19:52:09.557999 | orchestrator | Sunday 22 June 2025 19:49:56 +0000 (0:00:01.404) 0:02:12.298 *********** 2025-06-22 19:52:09.558005 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:09.558012 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:09.558062 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:09.558070 | orchestrator | 2025-06-22 19:52:09.558076 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:52:09.558083 | orchestrator | 2025-06-22 19:52:09.558090 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:52:09.558096 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:08.057) 0:02:20.356 *********** 2025-06-22 19:52:09.558103 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558110 | orchestrator | 2025-06-22 19:52:09.558117 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:52:09.558123 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:00.742) 0:02:21.098 *********** 2025-06-22 19:52:09.558140 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558147 | orchestrator | 2025-06-22 19:52:09.558154 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:52:09.558160 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.407) 0:02:21.506 *********** 2025-06-22 19:52:09.558167 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:52:09.558174 | orchestrator | 2025-06-22 19:52:09.558191 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:52:09.558198 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.984) 0:02:22.491 *********** 2025-06-22 19:52:09.558204 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558211 | orchestrator | 2025-06-22 
19:52:09.558218 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:52:09.558224 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.776) 0:02:23.267 *********** 2025-06-22 19:52:09.558231 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558238 | orchestrator | 2025-06-22 19:52:09.558244 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:52:09.558251 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.534) 0:02:23.802 *********** 2025-06-22 19:52:09.558258 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:52:09.558292 | orchestrator | 2025-06-22 19:52:09.558300 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:52:09.558307 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:01.546) 0:02:25.349 *********** 2025-06-22 19:52:09.558313 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:52:09.558320 | orchestrator | 2025-06-22 19:52:09.558327 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 19:52:09.558333 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:00.805) 0:02:26.154 *********** 2025-06-22 19:52:09.558340 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558346 | orchestrator | 2025-06-22 19:52:09.558353 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:52:09.558359 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.406) 0:02:26.560 *********** 2025-06-22 19:52:09.558366 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558373 | orchestrator | 2025-06-22 19:52:09.558379 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-22 19:52:09.558386 | orchestrator | 2025-06-22 19:52:09.558393 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-22 19:52:09.558400 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.417) 0:02:26.978 *********** 2025-06-22 19:52:09.558406 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558413 | orchestrator | 2025-06-22 19:52:09.558420 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-22 19:52:09.558426 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.137) 0:02:27.115 *********** 2025-06-22 19:52:09.558433 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:52:09.558440 | orchestrator | 2025-06-22 19:52:09.558446 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-22 19:52:09.558453 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.419) 0:02:27.535 *********** 2025-06-22 19:52:09.558459 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558466 | orchestrator | 2025-06-22 19:52:09.558473 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-22 19:52:09.558479 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.750) 0:02:28.285 *********** 2025-06-22 19:52:09.558486 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558492 | orchestrator | 2025-06-22 19:52:09.558499 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-06-22 19:52:09.558509 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:01.647) 0:02:29.933 *********** 2025-06-22 19:52:09.558527 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558539 | orchestrator | 2025-06-22 19:52:09.558553 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-22 19:52:09.558560 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:00.776) 0:02:30.709 *********** 2025-06-22 19:52:09.558566 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558573 | orchestrator | 2025-06-22 19:52:09.558580 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-22 19:52:09.558586 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:00.434) 0:02:31.144 *********** 2025-06-22 19:52:09.558593 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558600 | orchestrator | 2025-06-22 19:52:09.558606 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-22 19:52:09.558613 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:06.244) 0:02:37.389 *********** 2025-06-22 19:52:09.558619 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.558626 | orchestrator | 2025-06-22 19:52:09.558633 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-22 19:52:09.558640 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:10.214) 0:02:47.603 *********** 2025-06-22 19:52:09.558646 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.558653 | orchestrator | 2025-06-22 19:52:09.558659 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-22 19:52:09.558666 | orchestrator | 2025-06-22 19:52:09.558672 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-22 19:52:09.558679 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:00.458) 0:02:48.062 *********** 2025-06-22 19:52:09.558686 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.558692 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.558699 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.558705 | orchestrator | 2025-06-22 19:52:09.558712 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-22 19:52:09.558719 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.458) 0:02:48.520 *********** 2025-06-22 19:52:09.558725 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.558732 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.558739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.558745 | orchestrator | 2025-06-22 19:52:09.558752 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-22 19:52:09.558758 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.292) 0:02:48.812 *********** 2025-06-22 19:52:09.558765 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:52:09.558771 | orchestrator | 2025-06-22 19:52:09.558778 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-22 19:52:09.558790 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.487) 0:02:49.300 *********** 2025-06-22 
19:52:09.558796 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.558803 | orchestrator | 2025-06-22 19:52:09.558810 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-22 19:52:09.558817 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:01.058) 0:02:50.358 *********** 2025-06-22 19:52:09.558823 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.558830 | orchestrator | 2025-06-22 19:52:09.558837 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-22 19:52:09.558843 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.724) 0:02:51.083 *********** 2025-06-22 19:52:09.558850 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.558857 | orchestrator | 2025-06-22 19:52:09.558864 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-22 19:52:09.558871 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.237) 0:02:51.320 *********** 2025-06-22 19:52:09.558877 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.558884 | orchestrator | 2025-06-22 19:52:09.558896 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-22 19:52:09.558909 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:01.055) 0:02:52.376 *********** 2025-06-22 19:52:09.558920 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.558927 | orchestrator | 2025-06-22 19:52:09.558934 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-22 19:52:09.558940 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.200) 0:02:52.576 *********** 2025-06-22 19:52:09.558947 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.558953 | orchestrator | 2025-06-22 19:52:09.558960 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-22 19:52:09.558967 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.210) 0:02:52.787 *********** 2025-06-22 19:52:09.558973 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.558980 | orchestrator | 2025-06-22 19:52:09.558986 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-22 19:52:09.558993 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.213) 0:02:53.001 *********** 2025-06-22 19:52:09.559000 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.559006 | orchestrator | 2025-06-22 19:52:09.559013 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-22 19:52:09.559020 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.189) 0:02:53.190 *********** 2025-06-22 19:52:09.559026 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.559033 | orchestrator | 2025-06-22 19:52:09.559039 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-22 19:52:09.559046 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:04.936) 0:02:58.126 *********** 2025-06-22 19:52:09.559052 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-22 19:52:09.559059 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
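The Cilium install above is followed by a wait step that retried once before the operator, the agent daemonset and the Hubble components all reported ready (56.65s in the tasks recap). A sketch of an equivalent wait built on kubectl rollout status; the namespace, timeout and delay are assumptions, only the resource list is taken from the log:

    # Sketch only: block until the Cilium workloads have finished rolling out.
    - name: Wait for Cilium resources
      ansible.builtin.command:
        cmd: "kubectl -n kube-system rollout status {{ item }} --timeout=60s"
      loop:
        - deployment/cilium-operator
        - daemonset/cilium
        - deployment/hubble-relay
        - deployment/hubble-ui
      register: cilium_rollout
      until: cilium_rollout.rc == 0
      retries: 30       # the log shows 30 retries configured
      delay: 10
      changed_when: false
      delegate_to: localhost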
2025-06-22 19:52:09.559066 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-22 19:52:09.559072 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-22 19:52:09.559082 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-22 19:52:09.559088 | orchestrator | 2025-06-22 19:52:09.559095 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-22 19:52:09.559102 | orchestrator | Sunday 22 June 2025 19:51:38 +0000 (0:00:56.652) 0:03:54.779 *********** 2025-06-22 19:52:09.559108 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.559115 | orchestrator | 2025-06-22 19:52:09.559122 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-22 19:52:09.559129 | orchestrator | Sunday 22 June 2025 19:51:40 +0000 (0:00:01.647) 0:03:56.426 *********** 2025-06-22 19:52:09.559135 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.559142 | orchestrator | 2025-06-22 19:52:09.559148 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-22 19:52:09.559155 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:01.669) 0:03:58.095 *********** 2025-06-22 19:52:09.559162 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:52:09.559168 | orchestrator | 2025-06-22 19:52:09.559175 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-22 19:52:09.559182 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:01.493) 0:03:59.589 *********** 2025-06-22 19:52:09.559188 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.559195 | orchestrator | 2025-06-22 19:52:09.559201 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-22 19:52:09.559208 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.234) 0:03:59.824 *********** 2025-06-22 19:52:09.559215 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-22 19:52:09.559221 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-22 19:52:09.559232 | orchestrator | 2025-06-22 19:52:09.559239 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-22 19:52:09.559245 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:02.087) 0:04:01.911 *********** 2025-06-22 19:52:09.559252 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.559259 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.559280 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.559287 | orchestrator | 2025-06-22 19:52:09.559294 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-22 19:52:09.559301 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:00.282) 0:04:02.194 *********** 2025-06-22 19:52:09.559307 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.559314 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.559321 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.559327 | orchestrator | 2025-06-22 19:52:09.559338 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-22 19:52:09.559345 | orchestrator | 2025-06-22 
19:52:09.559352 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-22 19:52:09.559359 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:00.797) 0:04:02.992 *********** 2025-06-22 19:52:09.559365 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:09.559372 | orchestrator | 2025-06-22 19:52:09.559379 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-22 19:52:09.559385 | orchestrator | Sunday 22 June 2025 19:51:47 +0000 (0:00:00.311) 0:04:03.303 *********** 2025-06-22 19:52:09.559392 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:52:09.559399 | orchestrator | 2025-06-22 19:52:09.559405 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-22 19:52:09.559412 | orchestrator | Sunday 22 June 2025 19:51:47 +0000 (0:00:00.254) 0:04:03.558 *********** 2025-06-22 19:52:09.559419 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:09.559425 | orchestrator | 2025-06-22 19:52:09.559432 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-22 19:52:09.559438 | orchestrator | 2025-06-22 19:52:09.559445 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-22 19:52:09.559452 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:05.800) 0:04:09.358 *********** 2025-06-22 19:52:09.559458 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:09.559465 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:09.559472 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:09.559478 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:09.559485 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:09.559492 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:09.559498 | orchestrator | 2025-06-22 19:52:09.559505 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-22 19:52:09.559512 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:00.735) 0:04:10.093 *********** 2025-06-22 19:52:09.559518 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:52:09.559525 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:52:09.559532 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:52:09.559538 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:52:09.559545 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:52:09.559552 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:52:09.559558 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:52:09.559565 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:52:09.559575 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:52:09.559582 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:52:09.559591 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=openstack-control-plane=enabled) 2025-06-22 19:52:09.559598 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:52:09.559605 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:52:09.559612 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:52:09.559619 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:52:09.559625 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:52:09.559632 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:52:09.559639 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:52:09.559645 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:52:09.559652 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:52:09.559659 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:52:09.559665 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:52:09.559672 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:52:09.559678 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:52:09.559685 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:52:09.559692 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:52:09.559699 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:52:09.559705 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:52:09.559712 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:52:09.559719 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:52:09.559725 | orchestrator | 2025-06-22 19:52:09.559736 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-22 19:52:09.559743 | orchestrator | Sunday 22 June 2025 19:52:05 +0000 (0:00:11.915) 0:04:22.009 *********** 2025-06-22 19:52:09.559750 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.559756 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.559763 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.559770 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.559776 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.559783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.559790 | orchestrator | 2025-06-22 19:52:09.559797 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-22 19:52:09.559804 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.711) 0:04:22.721 *********** 2025-06-22 19:52:09.559810 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:09.559817 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:09.559824 
| orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:09.559831 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:09.559837 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:09.559844 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:09.559851 | orchestrator | 2025-06-22 19:52:09.559858 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:52:09.559868 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:09.559876 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-22 19:52:09.559884 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:52:09.559897 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:52:09.559908 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:52:09.559915 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:52:09.559922 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:52:09.559928 | orchestrator | 2025-06-22 19:52:09.559935 | orchestrator | 2025-06-22 19:52:09.559942 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:52:09.559949 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.387) 0:04:23.108 *********** 2025-06-22 19:52:09.559955 | orchestrator | =============================================================================== 2025-06-22 19:52:09.559962 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 56.65s 2025-06-22 19:52:09.559972 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.88s 2025-06-22 19:52:09.559979 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.41s 2025-06-22 19:52:09.559986 | orchestrator | Manage labels ---------------------------------------------------------- 11.92s 2025-06-22 19:52:09.559992 | orchestrator | kubectl : Install required packages ------------------------------------ 10.21s 2025-06-22 19:52:09.559999 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.06s 2025-06-22 19:52:09.560005 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.31s 2025-06-22 19:52:09.560012 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.24s 2025-06-22 19:52:09.560019 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.80s 2025-06-22 19:52:09.560025 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.94s 2025-06-22 19:52:09.560032 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.31s 2025-06-22 19:52:09.560039 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.96s 2025-06-22 19:52:09.560045 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.28s 2025-06-22 19:52:09.560052 | orchestrator | 
k3s_server_post : Test for BGP config resources ------------------------- 2.09s 2025-06-22 19:52:09.560058 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.09s 2025-06-22 19:52:09.560065 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.00s 2025-06-22 19:52:09.560072 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.98s 2025-06-22 19:52:09.560078 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.85s 2025-06-22 19:52:09.560085 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.82s 2025-06-22 19:52:09.560092 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.76s 2025-06-22 19:52:12.587349 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:12.587471 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:12.588978 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task a7369293-769e-4f9d-8ef0-81134169e69a is in state STARTED 2025-06-22 19:52:12.597861 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task 989b4cc9-2c85-4de7-b73a-0a5156d318ec is in state STARTED 2025-06-22 19:52:12.601545 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:12.605957 | orchestrator | 2025-06-22 19:52:12 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:12.605990 | orchestrator | 2025-06-22 19:52:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:15.649762 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:15.650088 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:15.650385 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task a7369293-769e-4f9d-8ef0-81134169e69a is in state SUCCESS 2025-06-22 19:52:15.651400 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task 989b4cc9-2c85-4de7-b73a-0a5156d318ec is in state STARTED 2025-06-22 19:52:15.652421 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:15.652970 | orchestrator | 2025-06-22 19:52:15 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:15.653095 | orchestrator | 2025-06-22 19:52:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:18.682350 | orchestrator | 2025-06-22 19:52:18 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:18.687992 | orchestrator | 2025-06-22 19:52:18 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:18.688033 | orchestrator | 2025-06-22 19:52:18 | INFO  | Task 989b4cc9-2c85-4de7-b73a-0a5156d318ec is in state SUCCESS 2025-06-22 19:52:18.688041 | orchestrator | 2025-06-22 19:52:18 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:18.689682 | orchestrator | 2025-06-22 19:52:18 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:18.689708 | orchestrator | 2025-06-22 19:52:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:21.732140 | orchestrator | 2025-06-22 19:52:21 | INFO  
| Tasks b549db4e-826f-4d91-8ced-61396a48bf3b, a7f52d0d-6f0c-427e-a2e5-fedf70731d74, 5a3250c8-52bb-40fb-817e-d1c339adf269 and 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 remain in state STARTED from 19:52:21 to 19:52:52; after each round the runner logs "Wait 1 second(s) until the next check" and polls again.
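The deploy job drives several long-running manager tasks and simply polls their state until they report SUCCESS. A minimal sketch of that wait loop in plain Ansible, assuming a hypothetical `osism task-status <uuid>` helper that prints the current state (the real OSISM CLI invocation and its output format are not shown in this log):

```yaml
---
# Hedged sketch of the wait loop seen above: poll each task until it leaves
# STARTED, pausing between checks. The "osism task-status" command is an
# assumption for illustration; the real CLI call is not visible in this log.
- name: Wait for manager tasks to finish
  hosts: orchestrator
  gather_facts: false
  vars:
    task_ids:
      - b549db4e-826f-4d91-8ced-61396a48bf3b
      - a7f52d0d-6f0c-427e-a2e5-fedf70731d74
  tasks:
    - name: Poll each task until it reports SUCCESS
      ansible.builtin.command: "osism task-status {{ item }}"   # hypothetical helper
      register: poll
      changed_when: false
      retries: 600        # long-running deploys need a generous retry budget
      delay: 3            # seconds between checks, matching the cadence in the log
      until: "'SUCCESS' in poll.stdout"
      loop: "{{ task_ids }}"
```

The `retries`/`delay`/`until` combination is Ansible's built-in retry mechanism, so no custom loop script is needed for this kind of wait.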
Wait 1 second(s) until the next check 2025-06-22 19:52:55.368078 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:55.368808 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:55.369689 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:55.370601 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:55.371056 | orchestrator | 2025-06-22 19:52:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:58.412982 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:52:58.415025 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:52:58.416233 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state STARTED 2025-06-22 19:52:58.416923 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:52:58.416961 | orchestrator | 2025-06-22 19:52:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:01.452815 | orchestrator | 2025-06-22 19:53:01 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:01.454238 | orchestrator | 2025-06-22 19:53:01 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:01.454271 | orchestrator | 2025-06-22 19:53:01 | INFO  | Task 5a3250c8-52bb-40fb-817e-d1c339adf269 is in state SUCCESS 2025-06-22 19:53:01.455365 | orchestrator | 2025-06-22 19:53:01.455400 | orchestrator | 2025-06-22 19:53:01.455413 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-22 19:53:01.455424 | orchestrator | 2025-06-22 19:53:01.455435 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:53:01.455447 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:00.155) 0:00:00.155 *********** 2025-06-22 19:53:01.455458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:53:01.455469 | orchestrator | 2025-06-22 19:53:01.455480 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:53:01.455492 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:00.686) 0:00:00.841 *********** 2025-06-22 19:53:01.455503 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:01.455514 | orchestrator | 2025-06-22 19:53:01.455541 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-22 19:53:01.455552 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:01.083) 0:00:01.925 *********** 2025-06-22 19:53:01.455563 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:01.455574 | orchestrator | 2025-06-22 19:53:01.455585 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:01.455596 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:01.455609 | orchestrator | 2025-06-22 19:53:01.455620 | orchestrator | 2025-06-22 19:53:01.455631 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 19:53:01.455669 | orchestrator | Sunday 22 June 2025 19:52:13 +0000 (0:00:00.643) 0:00:02.568 *********** 2025-06-22 19:53:01.455681 | orchestrator | =============================================================================== 2025-06-22 19:53:01.455692 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.08s 2025-06-22 19:53:01.455703 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-06-22 19:53:01.455713 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.64s 2025-06-22 19:53:01.455724 | orchestrator | 2025-06-22 19:53:01.455735 | orchestrator | 2025-06-22 19:53:01.455745 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:53:01.455756 | orchestrator | 2025-06-22 19:53:01.455767 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:53:01.455778 | orchestrator | Sunday 22 June 2025 19:52:10 +0000 (0:00:00.124) 0:00:00.124 *********** 2025-06-22 19:53:01.455789 | orchestrator | ok: [testbed-manager] 2025-06-22 19:53:01.455800 | orchestrator | 2025-06-22 19:53:01.455811 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:53:01.455822 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:00.535) 0:00:00.659 *********** 2025-06-22 19:53:01.455832 | orchestrator | ok: [testbed-manager] 2025-06-22 19:53:01.455843 | orchestrator | 2025-06-22 19:53:01.455854 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:53:01.455864 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:00.564) 0:00:01.223 *********** 2025-06-22 19:53:01.455875 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:53:01.455886 | orchestrator | 2025-06-22 19:53:01.455897 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:53:01.455908 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:00.635) 0:00:01.858 *********** 2025-06-22 19:53:01.455918 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:01.455929 | orchestrator | 2025-06-22 19:53:01.455942 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:53:01.455970 | orchestrator | Sunday 22 June 2025 19:52:13 +0000 (0:00:01.333) 0:00:03.192 *********** 2025-06-22 19:53:01.455982 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:01.455995 | orchestrator | 2025-06-22 19:53:01.456008 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:53:01.456020 | orchestrator | Sunday 22 June 2025 19:52:15 +0000 (0:00:01.073) 0:00:04.266 *********** 2025-06-22 19:53:01.456033 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:53:01.456045 | orchestrator | 2025-06-22 19:53:01.456057 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:53:01.456070 | orchestrator | Sunday 22 June 2025 19:52:16 +0000 (0:00:01.573) 0:00:05.839 *********** 2025-06-22 19:53:01.456082 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:53:01.456094 | orchestrator | 2025-06-22 19:53:01.456106 | orchestrator | TASK [Set KUBECONFIG 
environment variable] ************************************* 2025-06-22 19:53:01.456119 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:00.742) 0:00:06.582 *********** 2025-06-22 19:53:01.456131 | orchestrator | ok: [testbed-manager] 2025-06-22 19:53:01.456143 | orchestrator | 2025-06-22 19:53:01.456155 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:53:01.456167 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:00.384) 0:00:06.967 *********** 2025-06-22 19:53:01.456179 | orchestrator | ok: [testbed-manager] 2025-06-22 19:53:01.456191 | orchestrator | 2025-06-22 19:53:01.456204 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:01.456257 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:01.456268 | orchestrator | 2025-06-22 19:53:01.456279 | orchestrator | 2025-06-22 19:53:01.456307 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:53:01.456319 | orchestrator | Sunday 22 June 2025 19:52:18 +0000 (0:00:00.411) 0:00:07.378 *********** 2025-06-22 19:53:01.456330 | orchestrator | =============================================================================== 2025-06-22 19:53:01.456341 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s 2025-06-22 19:53:01.456352 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.33s 2025-06-22 19:53:01.456363 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.07s 2025-06-22 19:53:01.456386 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-06-22 19:53:01.456398 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.64s 2025-06-22 19:53:01.456408 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s 2025-06-22 19:53:01.456419 | orchestrator | Get home directory of operator user ------------------------------------- 0.54s 2025-06-22 19:53:01.456430 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.41s 2025-06-22 19:53:01.456441 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2025-06-22 19:53:01.456451 | orchestrator | 2025-06-22 19:53:01.456462 | orchestrator | 2025-06-22 19:53:01.456473 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-22 19:53:01.456484 | orchestrator | 2025-06-22 19:53:01.456501 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 19:53:01.456512 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.088) 0:00:00.088 *********** 2025-06-22 19:53:01.456523 | orchestrator | ok: [localhost] => { 2025-06-22 19:53:01.456535 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-06-22 19:53:01.456546 | orchestrator | } 2025-06-22 19:53:01.456557 | orchestrator | 2025-06-22 19:53:01.456568 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-22 19:53:01.456579 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.059) 0:00:00.148 *********** 2025-06-22 19:53:01.456600 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-22 19:53:01.456612 | orchestrator | ...ignoring 2025-06-22 19:53:01.456623 | orchestrator | 2025-06-22 19:53:01.456634 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-22 19:53:01.456645 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:03.576) 0:00:03.725 *********** 2025-06-22 19:53:01.456656 | orchestrator | skipping: [localhost] 2025-06-22 19:53:01.456667 | orchestrator | 2025-06-22 19:53:01.456678 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-22 19:53:01.456706 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:00.067) 0:00:03.792 *********** 2025-06-22 19:53:01.456718 | orchestrator | ok: [localhost] 2025-06-22 19:53:01.456729 | orchestrator | 2025-06-22 19:53:01.456740 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:53:01.456751 | orchestrator | 2025-06-22 19:53:01.456761 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:53:01.456772 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:00.217) 0:00:04.010 *********** 2025-06-22 19:53:01.456783 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:01.456794 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:01.456805 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:01.456816 | orchestrator | 2025-06-22 19:53:01.456827 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:53:01.456838 | orchestrator | Sunday 22 June 2025 19:50:52 +0000 (0:00:00.283) 0:00:04.293 *********** 2025-06-22 19:53:01.456849 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-22 19:53:01.456860 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-22 19:53:01.456871 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-22 19:53:01.456882 | orchestrator | 2025-06-22 19:53:01.456893 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-22 19:53:01.456903 | orchestrator | 2025-06-22 19:53:01.456914 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:53:01.456925 | orchestrator | Sunday 22 June 2025 19:50:52 +0000 (0:00:00.459) 0:00:04.753 *********** 2025-06-22 19:53:01.456936 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:53:01.456947 | orchestrator | 2025-06-22 19:53:01.456958 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 19:53:01.456969 | orchestrator | Sunday 22 June 2025 19:50:53 +0000 (0:00:00.679) 0:00:05.432 *********** 2025-06-22 19:53:01.456980 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:01.456991 | orchestrator | 2025-06-22 19:53:01.457002 | orchestrator | TASK 
[rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-22 19:53:01.457013 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:00.843) 0:00:06.276 *********** 2025-06-22 19:53:01.457042 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457053 | orchestrator | 2025-06-22 19:53:01.457064 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-22 19:53:01.457075 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:00.329) 0:00:06.605 *********** 2025-06-22 19:53:01.457086 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457097 | orchestrator | 2025-06-22 19:53:01.457107 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-22 19:53:01.457119 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:00.350) 0:00:06.956 *********** 2025-06-22 19:53:01.457129 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457140 | orchestrator | 2025-06-22 19:53:01.457151 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-22 19:53:01.457162 | orchestrator | Sunday 22 June 2025 19:50:55 +0000 (0:00:00.310) 0:00:07.266 *********** 2025-06-22 19:53:01.457173 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457184 | orchestrator | 2025-06-22 19:53:01.457214 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:53:01.457225 | orchestrator | Sunday 22 June 2025 19:50:55 +0000 (0:00:00.591) 0:00:07.858 *********** 2025-06-22 19:53:01.457236 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:53:01.457247 | orchestrator | 2025-06-22 19:53:01.457258 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 19:53:01.457276 | orchestrator | Sunday 22 June 2025 19:50:56 +0000 (0:00:00.939) 0:00:08.797 *********** 2025-06-22 19:53:01.457287 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:01.457343 | orchestrator | 2025-06-22 19:53:01.457354 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-22 19:53:01.457365 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.829) 0:00:09.627 *********** 2025-06-22 19:53:01.457376 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457386 | orchestrator | 2025-06-22 19:53:01.457397 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-22 19:53:01.457408 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.499) 0:00:10.126 *********** 2025-06-22 19:53:01.457419 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.457430 | orchestrator | 2025-06-22 19:53:01.457441 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-22 19:53:01.457457 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:00.508) 0:00:10.637 *********** 2025-06-22 19:53:01.457473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457523 | orchestrator | 2025-06-22 19:53:01.457534 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-22 19:53:01.457545 | orchestrator | Sunday 22 June 2025 19:50:59 +0000 (0:00:01.150) 0:00:11.787 *********** 2025-06-22 19:53:01.457571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.457616 | orchestrator | 2025-06-22 19:53:01.457627 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-22 19:53:01.457638 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:01.510) 0:00:13.297 *********** 2025-06-22 19:53:01.457649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:01.457659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:01.457670 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:01.457681 | orchestrator | 2025-06-22 19:53:01.457692 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-22 19:53:01.457703 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:02.519) 0:00:15.816 *********** 2025-06-22 19:53:01.457714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:53:01.457725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:53:01.457736 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:53:01.457746 | orchestrator | 2025-06-22 19:53:01.457764 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-22 19:53:01.457776 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:02.167) 0:00:17.983 *********** 2025-06-22 19:53:01.457786 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:53:01.457797 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:53:01.457808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:53:01.457819 | orchestrator | 2025-06-22 19:53:01.457829 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-22 19:53:01.457845 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:01.259) 0:00:19.243 *********** 2025-06-22 19:53:01.457856 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:53:01.457867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:53:01.457878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:53:01.457889 | orchestrator | 2025-06-22 19:53:01.457900 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-22 19:53:01.457911 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:01.605) 0:00:20.848 *********** 2025-06-22 19:53:01.457921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:53:01.457932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:53:01.457943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:53:01.457954 | orchestrator | 2025-06-22 19:53:01.457964 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-22 19:53:01.457975 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:01.486) 0:00:22.335 *********** 2025-06-22 19:53:01.457986 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:53:01.457996 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:53:01.458007 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:53:01.458063 | orchestrator | 2025-06-22 
19:53:01.458078 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:53:01.458097 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:02.125) 0:00:24.460 *********** 2025-06-22 19:53:01.458108 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.458119 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:01.458130 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:01.458141 | orchestrator | 2025-06-22 19:53:01.458152 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-22 19:53:01.458163 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:00.490) 0:00:24.951 *********** 2025-06-22 19:53:01.458175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.458198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.458216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:53:01.458228 | orchestrator | 2025-06-22 19:53:01.458239 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-22 19:53:01.458256 | orchestrator | Sunday 22 June 2025 19:51:14 +0000 (0:00:01.475) 0:00:26.427 *********** 2025-06-22 19:53:01.458267 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:01.458278 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:01.458335 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:01.458350 | orchestrator | 2025-06-22 19:53:01.458362 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-22 19:53:01.458372 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:00.837) 0:00:27.264 *********** 2025-06-22 19:53:01.458383 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:01.458394 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:01.458405 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:01.458415 | orchestrator | 2025-06-22 19:53:01.458426 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-22 19:53:01.458437 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:07.469) 0:00:34.733 *********** 2025-06-22 19:53:01.458447 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:01.458458 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:01.458469 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:01.458479 | orchestrator | 2025-06-22 19:53:01.458490 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:53:01.458501 | orchestrator | 2025-06-22 19:53:01.458511 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:53:01.458522 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:00.453) 0:00:35.187 *********** 2025-06-22 19:53:01.458532 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:01.458543 | orchestrator | 2025-06-22 19:53:01.458554 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:53:01.458564 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.572) 0:00:35.760 *********** 2025-06-22 19:53:01.458574 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:01.458583 | orchestrator | 2025-06-22 19:53:01.458593 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:53:01.458602 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.380) 0:00:36.141 *********** 2025-06-22 19:53:01.458612 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:01.458621 | orchestrator | 2025-06-22 19:53:01.458631 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:53:01.458640 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:07.550) 0:00:43.692 *********** 2025-06-22 19:53:01.458650 | orchestrator | 
changed: [testbed-node-0] 2025-06-22 19:53:01.458659 | orchestrator | 2025-06-22 19:53:01.458668 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:53:01.458678 | orchestrator | 2025-06-22 19:53:01.458688 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:53:01.458697 | orchestrator | Sunday 22 June 2025 19:52:20 +0000 (0:00:48.556) 0:01:32.248 *********** 2025-06-22 19:53:01.458706 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:01.458716 | orchestrator | 2025-06-22 19:53:01.458725 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:53:01.458735 | orchestrator | Sunday 22 June 2025 19:52:20 +0000 (0:00:00.598) 0:01:32.846 *********** 2025-06-22 19:53:01.458744 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:01.458754 | orchestrator | 2025-06-22 19:53:01.458763 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:53:01.458773 | orchestrator | Sunday 22 June 2025 19:52:21 +0000 (0:00:00.432) 0:01:33.278 *********** 2025-06-22 19:53:01.458783 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:01.458792 | orchestrator | 2025-06-22 19:53:01.458802 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:53:01.458811 | orchestrator | Sunday 22 June 2025 19:52:27 +0000 (0:00:06.931) 0:01:40.210 *********** 2025-06-22 19:53:01.458821 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:01.458830 | orchestrator | 2025-06-22 19:53:01.458840 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:53:01.458862 | orchestrator | 2025-06-22 19:53:01.458872 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:53:01.458888 | orchestrator | Sunday 22 June 2025 19:52:37 +0000 (0:00:09.540) 0:01:49.751 *********** 2025-06-22 19:53:01.458898 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:01.458907 | orchestrator | 2025-06-22 19:53:01.458917 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:53:01.458927 | orchestrator | Sunday 22 June 2025 19:52:38 +0000 (0:00:00.555) 0:01:50.306 *********** 2025-06-22 19:53:01.458936 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:01.458945 | orchestrator | 2025-06-22 19:53:01.458955 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:53:01.458964 | orchestrator | Sunday 22 June 2025 19:52:38 +0000 (0:00:00.240) 0:01:50.547 *********** 2025-06-22 19:53:01.458974 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:01.458983 | orchestrator | 2025-06-22 19:53:01.458993 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:53:01.459003 | orchestrator | Sunday 22 June 2025 19:52:40 +0000 (0:00:01.890) 0:01:52.437 *********** 2025-06-22 19:53:01.459013 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:01.459022 | orchestrator | 2025-06-22 19:53:01.459032 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-22 19:53:01.459041 | orchestrator | 2025-06-22 19:53:01.459051 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-22 
19:53:01.459132 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:14.869) 0:02:07.307 *********** 2025-06-22 19:53:01.459155 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:53:01.459165 | orchestrator | 2025-06-22 19:53:01.459174 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-22 19:53:01.459184 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:00.838) 0:02:08.145 *********** 2025-06-22 19:53:01.459193 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:53:01.459203 | orchestrator | enable_outward_rabbitmq_True 2025-06-22 19:53:01.459213 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:53:01.459222 | orchestrator | outward_rabbitmq_restart 2025-06-22 19:53:01.459232 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:01.459241 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:01.459251 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:01.459260 | orchestrator | 2025-06-22 19:53:01.459270 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-22 19:53:01.459279 | orchestrator | skipping: no hosts matched 2025-06-22 19:53:01.459289 | orchestrator | 2025-06-22 19:53:01.459313 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-22 19:53:01.459323 | orchestrator | skipping: no hosts matched 2025-06-22 19:53:01.459332 | orchestrator | 2025-06-22 19:53:01.459342 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-22 19:53:01.459351 | orchestrator | skipping: no hosts matched 2025-06-22 19:53:01.459361 | orchestrator | 2025-06-22 19:53:01.459370 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:01.459380 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 19:53:01.459390 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 19:53:01.459400 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:53:01.459409 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:53:01.459426 | orchestrator | 2025-06-22 19:53:01.459436 | orchestrator | 2025-06-22 19:53:01.459446 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:53:01.459455 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:02.350) 0:02:10.496 *********** 2025-06-22 19:53:01.459464 | orchestrator | =============================================================================== 2025-06-22 19:53:01.459474 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 72.97s 2025-06-22 19:53:01.459483 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.37s 2025-06-22 19:53:01.459492 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.47s 2025-06-22 19:53:01.459519 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.58s 2025-06-22 19:53:01.459529 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.52s 2025-06-22 
19:53:01.459539 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.35s 2025-06-22 19:53:01.459548 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.17s 2025-06-22 19:53:01.459557 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.13s 2025-06-22 19:53:01.459567 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.73s 2025-06-22 19:53:01.459576 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.61s 2025-06-22 19:53:01.459585 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.51s 2025-06-22 19:53:01.459595 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.49s 2025-06-22 19:53:01.459604 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.48s 2025-06-22 19:53:01.459614 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.26s 2025-06-22 19:53:01.459623 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.15s 2025-06-22 19:53:01.459641 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.05s 2025-06-22 19:53:01.459651 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.94s 2025-06-22 19:53:01.459661 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.84s 2025-06-22 19:53:01.459670 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.84s 2025-06-22 19:53:01.459680 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.84s 2025-06-22 19:53:01.459690 | orchestrator | 2025-06-22 19:53:01 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:01.459716 | orchestrator | 2025-06-22 19:53:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:04.485638 | orchestrator | 2025-06-22 19:53:04 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:04.485731 | orchestrator | 2025-06-22 19:53:04 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:04.485745 | orchestrator | 2025-06-22 19:53:04 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:04.485757 | orchestrator | 2025-06-22 19:53:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:07.513934 | orchestrator | 2025-06-22 19:53:07 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:07.514503 | orchestrator | 2025-06-22 19:53:07 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:07.515953 | orchestrator | 2025-06-22 19:53:07 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:07.516196 | orchestrator | 2025-06-22 19:53:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:10.557086 | orchestrator | 2025-06-22 19:53:10 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:10.557209 | orchestrator | 2025-06-22 19:53:10 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:10.559184 | orchestrator | 2025-06-22 19:53:10 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 
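Two patterns stand out in the RabbitMQ section above: the "Check RabbitMQ service" probe, whose timeout is expected and ignored on a fresh testbed so that the default deploy action is kept, and the node-by-node restart plays that wait for each broker to come back before moving on. A minimal sketch of the probe-and-decide step, assuming the VIP address and port from the log (variable names here are illustrative, not necessarily the ones used by the real kolla/OSISM playbooks):

```yaml
---
# Hedged sketch of the "Check RabbitMQ service" pre-check: probe the
# management UI and only pick the upgrade action when a broker already
# answers. Address and port are taken from the log; the fact name is
# illustrative.
- name: Decide how to run the rabbitmq role
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9
        port: 15672
        search_regex: "RabbitMQ Management"
        timeout: 2
      register: rabbitmq_probe
      ignore_errors: true        # a timeout just means RabbitMQ is not deployed yet

    - name: Set kolla_action_rabbitmq based on the probe
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: "{{ 'upgrade' if rabbitmq_probe is succeeded else 'deploy' }}"
```

Because the probe times out on a first deployment, `ignore_errors` keeps the play going and the role falls back to the regular deploy path, which is exactly what the "...ignoring" line above records.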
2025-06-22 19:53:10.559275 | orchestrator | 2025-06-22 19:53:10 | INFO  | Wait 1 second(s) until the next check; tasks b549db4e-826f-4d91-8ced-61396a48bf3b, a7f52d0d-6f0c-427e-a2e5-fedf70731d74 and 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 remain in state STARTED while the check repeats until 19:53:53. 2025-06-22 19:53:53.221604 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task
a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:53.221644 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:53.221657 | orchestrator | 2025-06-22 19:53:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:56.263707 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:56.263800 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:56.265595 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:56.265775 | orchestrator | 2025-06-22 19:53:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:59.306384 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:53:59.306474 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:53:59.306914 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:53:59.306937 | orchestrator | 2025-06-22 19:53:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:02.348645 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:02.350189 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:02.352325 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:54:02.352429 | orchestrator | 2025-06-22 19:54:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:05.394848 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:05.394939 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:05.395205 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:54:05.395237 | orchestrator | 2025-06-22 19:54:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:08.432391 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:08.433187 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:08.434896 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state STARTED 2025-06-22 19:54:08.434928 | orchestrator | 2025-06-22 19:54:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:11.480720 | orchestrator | 2025-06-22 19:54:11 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:11.483165 | orchestrator | 2025-06-22 19:54:11 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:11.486433 | orchestrator | 2025-06-22 19:54:11.486466 | orchestrator | 2025-06-22 19:54:11 | INFO  | Task 35b4dbfd-e5bd-41b2-8b27-b7deaf3c5d48 is in state SUCCESS 2025-06-22 19:54:11.487067 | orchestrator | 2025-06-22 19:54:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:11.488625 | orchestrator | 2025-06-22 19:54:11.488655 | orchestrator 
| PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:54:11.488667 | orchestrator | 2025-06-22 19:54:11.488679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:54:11.488691 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:00.165) 0:00:00.165 *********** 2025-06-22 19:54:11.488732 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:11.488746 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:11.488756 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:11.488767 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.488778 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.488789 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.488800 | orchestrator | 2025-06-22 19:54:11.488811 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:54:11.488822 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:00.575) 0:00:00.740 *********** 2025-06-22 19:54:11.488833 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-22 19:54:11.488844 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-22 19:54:11.488855 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-22 19:54:11.488866 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-22 19:54:11.488878 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-22 19:54:11.488889 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-22 19:54:11.488900 | orchestrator | 2025-06-22 19:54:11.488912 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-22 19:54:11.488923 | orchestrator | 2025-06-22 19:54:11.488934 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-22 19:54:11.488945 | orchestrator | Sunday 22 June 2025 19:51:37 +0000 (0:00:01.045) 0:00:01.786 *********** 2025-06-22 19:54:11.488958 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:11.488970 | orchestrator | 2025-06-22 19:54:11.488982 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-22 19:54:11.488993 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:00:02.017) 0:00:03.804 *********** 2025-06-22 19:54:11.489021 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489048 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489118 | orchestrator | 2025-06-22 19:54:11.489129 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-22 19:54:11.489270 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:02.211) 0:00:06.015 *********** 2025-06-22 19:54:11.489286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-22 19:54:11.489345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489378 | orchestrator | 2025-06-22 19:54:11.489389 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-22 19:54:11.489400 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:01.461) 0:00:07.476 *********** 2025-06-22 19:54:11.489421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489497 | orchestrator | 2025-06-22 19:54:11.489508 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-22 19:54:11.489523 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:01.396) 0:00:08.873 *********** 2025-06-22 19:54:11.489535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489617 | orchestrator | 2025-06-22 19:54:11.489628 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-22 19:54:11.489638 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:01.539) 0:00:10.413 *********** 2025-06-22 19:54:11.489649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.489728 | orchestrator | 2025-06-22 19:54:11.489739 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-22 19:54:11.489750 | orchestrator | Sunday 22 June 2025 19:51:47 +0000 (0:00:01.572) 0:00:11.985 *********** 2025-06-22 19:54:11.489761 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:11.489772 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:11.489782 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:11.489793 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.489803 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.489814 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.489824 | orchestrator | 2025-06-22 19:54:11.489835 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-22 19:54:11.489846 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:02.237) 0:00:14.222 *********** 2025-06-22 19:54:11.489856 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-22 19:54:11.489867 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-22 19:54:11.489878 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-22 19:54:11.489894 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-22 19:54:11.489905 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-22 19:54:11.489916 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.489926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-22 19:54:11.489937 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.489948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.489958 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.489969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.489979 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:54:11.489992 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:54:11.490003 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:54:11.490062 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:54:11.490077 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-06-22 19:54:11.490088 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:54:11.490107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:54:11.490123 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490147 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490157 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490168 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490179 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490189 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:54:11.490200 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490221 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490238 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:54:11.490274 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490291 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490368 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490402 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:54:11.490413 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:54:11.490424 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:54:11.490435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:54:11.490445 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:54:11.490465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:54:11.490476 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:54:11.490487 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-22 19:54:11.490499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-22 19:54:11.490510 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-22 19:54:11.490520 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-22 19:54:11.490531 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-22 19:54:11.490551 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-22 19:54:11.490562 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:54:11.490573 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:54:11.490584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:54:11.490595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:54:11.490605 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:54:11.490616 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:54:11.490627 | orchestrator | 2025-06-22 19:54:11.490644 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490655 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:16.899) 0:00:31.122 *********** 2025-06-22 19:54:11.490666 | orchestrator | 2025-06-22 19:54:11.490677 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490688 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.048) 0:00:31.170 *********** 2025-06-22 19:54:11.490698 | orchestrator | 2025-06-22 19:54:11.490709 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490720 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.048) 0:00:31.219 *********** 2025-06-22 19:54:11.490730 | orchestrator | 2025-06-22 19:54:11.490741 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490752 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.052) 0:00:31.272 *********** 2025-06-22 19:54:11.490762 | orchestrator | 2025-06-22 19:54:11.490773 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490784 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.059) 0:00:31.332 
*********** 2025-06-22 19:54:11.490794 | orchestrator | 2025-06-22 19:54:11.490805 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:54:11.490816 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.057) 0:00:31.389 *********** 2025-06-22 19:54:11.490827 | orchestrator | 2025-06-22 19:54:11.490838 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-22 19:54:11.490848 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:00.058) 0:00:31.447 *********** 2025-06-22 19:54:11.490859 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:11.490870 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:11.490881 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:11.490891 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.490902 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.490912 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.490923 | orchestrator | 2025-06-22 19:54:11.490934 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-22 19:54:11.490945 | orchestrator | Sunday 22 June 2025 19:52:08 +0000 (0:00:02.038) 0:00:33.486 *********** 2025-06-22 19:54:11.490955 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.490966 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.490977 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:11.490987 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:11.490998 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:11.491008 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.491019 | orchestrator | 2025-06-22 19:54:11.491030 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-22 19:54:11.491050 | orchestrator | 2025-06-22 19:54:11.491061 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:54:11.491072 | orchestrator | Sunday 22 June 2025 19:52:49 +0000 (0:00:40.367) 0:01:13.853 *********** 2025-06-22 19:54:11.491083 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:11.491093 | orchestrator | 2025-06-22 19:54:11.491104 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:54:11.491115 | orchestrator | Sunday 22 June 2025 19:52:49 +0000 (0:00:00.506) 0:01:14.360 *********** 2025-06-22 19:54:11.491126 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:11.491136 | orchestrator | 2025-06-22 19:54:11.491153 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-22 19:54:11.491164 | orchestrator | Sunday 22 June 2025 19:52:50 +0000 (0:00:00.671) 0:01:15.032 *********** 2025-06-22 19:54:11.491175 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.491186 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.491197 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.491207 | orchestrator | 2025-06-22 19:54:11.491218 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-22 19:54:11.491229 | orchestrator | Sunday 22 June 2025 19:52:51 +0000 (0:00:00.896) 0:01:15.928 *********** 2025-06-22 19:54:11.491239 | orchestrator | ok: [testbed-node-0] 
2025-06-22 19:54:11.491250 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.491261 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.491272 | orchestrator | 2025-06-22 19:54:11.491282 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-22 19:54:11.491293 | orchestrator | Sunday 22 June 2025 19:52:51 +0000 (0:00:00.415) 0:01:16.343 *********** 2025-06-22 19:54:11.491304 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.491369 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.491381 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.491392 | orchestrator | 2025-06-22 19:54:11.491402 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-22 19:54:11.491413 | orchestrator | Sunday 22 June 2025 19:52:51 +0000 (0:00:00.327) 0:01:16.671 *********** 2025-06-22 19:54:11.491451 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.491463 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.491474 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.491484 | orchestrator | 2025-06-22 19:54:11.491495 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-22 19:54:11.491506 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:00.636) 0:01:17.308 *********** 2025-06-22 19:54:11.491517 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.491528 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.491538 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.491549 | orchestrator | 2025-06-22 19:54:11.491560 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-22 19:54:11.491571 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:00.339) 0:01:17.647 *********** 2025-06-22 19:54:11.491582 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491593 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.491615 | orchestrator | 2025-06-22 19:54:11.491626 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-22 19:54:11.491636 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:00.292) 0:01:17.940 *********** 2025-06-22 19:54:11.491647 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491672 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491683 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.491694 | orchestrator | 2025-06-22 19:54:11.491705 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-22 19:54:11.491715 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:00.318) 0:01:18.258 *********** 2025-06-22 19:54:11.491735 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491746 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491756 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.491767 | orchestrator | 2025-06-22 19:54:11.491778 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-22 19:54:11.491789 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:00.507) 0:01:18.765 *********** 2025-06-22 19:54:11.491799 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491810 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491820 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:54:11.491831 | orchestrator | 2025-06-22 19:54:11.491842 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-22 19:54:11.491853 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.319) 0:01:19.085 *********** 2025-06-22 19:54:11.491864 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491875 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491885 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.491896 | orchestrator | 2025-06-22 19:54:11.491907 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-22 19:54:11.491918 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.343) 0:01:19.429 *********** 2025-06-22 19:54:11.491937 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.491956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.491967 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.491978 | orchestrator | 2025-06-22 19:54:11.491988 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-22 19:54:11.491999 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.300) 0:01:19.730 *********** 2025-06-22 19:54:11.492009 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492020 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492030 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492041 | orchestrator | 2025-06-22 19:54:11.492051 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-22 19:54:11.492062 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:00.601) 0:01:20.332 *********** 2025-06-22 19:54:11.492072 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492083 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492094 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492104 | orchestrator | 2025-06-22 19:54:11.492115 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-22 19:54:11.492125 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:00.348) 0:01:20.680 *********** 2025-06-22 19:54:11.492136 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492146 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492157 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492167 | orchestrator | 2025-06-22 19:54:11.492178 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-22 19:54:11.492189 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:00.325) 0:01:21.006 *********** 2025-06-22 19:54:11.492199 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492210 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492221 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492231 | orchestrator | 2025-06-22 19:54:11.492250 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-22 19:54:11.492261 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:00.313) 0:01:21.319 *********** 2025-06-22 19:54:11.492272 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492283 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492294 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492304 | orchestrator | 
2025-06-22 19:54:11.492335 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-22 19:54:11.492346 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:00.496) 0:01:21.816 *********** 2025-06-22 19:54:11.492364 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492375 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492386 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492396 | orchestrator | 2025-06-22 19:54:11.492407 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:54:11.492417 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:00.316) 0:01:22.132 *********** 2025-06-22 19:54:11.492428 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:11.492439 | orchestrator | 2025-06-22 19:54:11.492450 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-22 19:54:11.492460 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:00.550) 0:01:22.683 *********** 2025-06-22 19:54:11.492471 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.492481 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.492492 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.492503 | orchestrator | 2025-06-22 19:54:11.492513 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-22 19:54:11.492524 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:01.055) 0:01:23.739 *********** 2025-06-22 19:54:11.492534 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.492545 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.492555 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.492566 | orchestrator | 2025-06-22 19:54:11.492576 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-22 19:54:11.492587 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.425) 0:01:24.164 *********** 2025-06-22 19:54:11.492598 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492609 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492619 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492630 | orchestrator | 2025-06-22 19:54:11.492641 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-22 19:54:11.492652 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.349) 0:01:24.514 *********** 2025-06-22 19:54:11.492668 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492679 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492690 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492700 | orchestrator | 2025-06-22 19:54:11.492711 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-22 19:54:11.492721 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:00.325) 0:01:24.839 *********** 2025-06-22 19:54:11.492732 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492742 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492753 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492763 | orchestrator | 2025-06-22 19:54:11.492774 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-22 
19:54:11.492784 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:00.622) 0:01:25.462 *********** 2025-06-22 19:54:11.492795 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492806 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492816 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492827 | orchestrator | 2025-06-22 19:54:11.492837 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-22 19:54:11.492848 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:00.407) 0:01:25.869 *********** 2025-06-22 19:54:11.492859 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492870 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492880 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492891 | orchestrator | 2025-06-22 19:54:11.492902 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-22 19:54:11.492913 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:00.487) 0:01:26.357 *********** 2025-06-22 19:54:11.492923 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.492934 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.492951 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.492962 | orchestrator | 2025-06-22 19:54:11.492973 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:54:11.492983 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 (0:00:00.438) 0:01:26.795 *********** 2025-06-22 19:54:11.492995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493122 | orchestrator | 2025-06-22 19:54:11.493133 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 19:54:11.493144 | orchestrator | Sunday 22 June 2025 19:53:03 +0000 (0:00:01.529) 0:01:28.325 *********** 2025-06-22 19:54:11.493155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493275 | orchestrator | 2025-06-22 19:54:11.493287 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:54:11.493297 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:03.792) 0:01:32.118 *********** 2025-06-22 19:54:11.493324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493337 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.493434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 
19:54:11.493452 | orchestrator | 2025-06-22 19:54:11.493463 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.493474 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:02.027) 0:01:34.145 *********** 2025-06-22 19:54:11.493484 | orchestrator | 2025-06-22 19:54:11.493496 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.493506 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.083) 0:01:34.229 *********** 2025-06-22 19:54:11.493517 | orchestrator | 2025-06-22 19:54:11.493528 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.493538 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.067) 0:01:34.296 *********** 2025-06-22 19:54:11.493549 | orchestrator | 2025-06-22 19:54:11.493560 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:54:11.493570 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.083) 0:01:34.380 *********** 2025-06-22 19:54:11.493581 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.493592 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.493602 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.493614 | orchestrator | 2025-06-22 19:54:11.493624 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:54:11.493635 | orchestrator | Sunday 22 June 2025 19:53:17 +0000 (0:00:08.353) 0:01:42.733 *********** 2025-06-22 19:54:11.493646 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.493656 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.493667 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.493678 | orchestrator | 2025-06-22 19:54:11.493689 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:54:11.493699 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:06.885) 0:01:49.619 *********** 2025-06-22 19:54:11.493710 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.493721 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.493731 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.493742 | orchestrator | 2025-06-22 19:54:11.493753 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:54:11.493764 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:07.403) 0:01:57.022 *********** 2025-06-22 19:54:11.493774 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:11.493785 | orchestrator | 2025-06-22 19:54:11.493796 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-22 19:54:11.493806 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.103) 0:01:57.126 *********** 2025-06-22 19:54:11.493817 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.493828 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.493839 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.493850 | orchestrator | 2025-06-22 19:54:11.493866 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 19:54:11.493878 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.693) 0:01:57.819 *********** 2025-06-22 19:54:11.493888 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
19:54:11.493899 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.493910 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.493920 | orchestrator | 2025-06-22 19:54:11.493931 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 19:54:11.493942 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.692) 0:01:58.511 *********** 2025-06-22 19:54:11.493953 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.493964 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.493974 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.493990 | orchestrator | 2025-06-22 19:54:11.494001 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 19:54:11.494011 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:00.769) 0:01:59.281 *********** 2025-06-22 19:54:11.494055 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:11.494066 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:11.494110 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:11.494122 | orchestrator | 2025-06-22 19:54:11.494133 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 19:54:11.494143 | orchestrator | Sunday 22 June 2025 19:53:35 +0000 (0:00:00.608) 0:01:59.889 *********** 2025-06-22 19:54:11.494154 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.494165 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.494175 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.494186 | orchestrator | 2025-06-22 19:54:11.494197 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 19:54:11.494208 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:00.978) 0:02:00.868 *********** 2025-06-22 19:54:11.494218 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.494229 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.494240 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.494250 | orchestrator | 2025-06-22 19:54:11.494261 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-22 19:54:11.494272 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:01.032) 0:02:01.900 *********** 2025-06-22 19:54:11.494283 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:11.494293 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:11.494304 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:11.494364 | orchestrator | 2025-06-22 19:54:11.494376 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:54:11.494387 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:00.263) 0:02:02.164 *********** 2025-06-22 19:54:11.494404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494417 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
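The "Configure OVN NB connection settings" and "Configure OVN SB connection settings" tasks are skipped on testbed-node-1 and testbed-node-2 and change only the leader node; roughly, they point the databases at their TCP listeners. A hedged sketch using the default OVN NB port 6641, with ovn_nb_leader standing in as a hypothetical fact:

# Sketch only; kolla-ansible derives the leader and the listen address itself,
# and the inactivity-probe value here is an assumption.
- name: Configure OVN NB connection settings
  ansible.builtin.command: >
    docker exec ovn_nb_db
    ovn-nbctl --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
  when: inventory_hostname == ovn_nb_leader  # hypothetical fact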
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494428 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494440 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494470 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494502 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494513 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494524 | orchestrator | 2025-06-22 19:54:11.494535 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 19:54:11.494545 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:01.331) 0:02:03.496 *********** 
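The pattern that repeats throughout this play (render a config.json for a service, notify a restart handler, then "Flush handlers") looks roughly like the sketch below. The template name, target path, and the plain docker restart are simplifications; kolla-ansible performs the restart with its own kolla_container module.

- name: Copying over config.json files for services
  ansible.builtin.template:
    src: ovn-nb-db.json.j2                     # hypothetical template name
    dest: /etc/kolla/ovn-nb-db/config.json
    mode: "0660"
  notify: Restart ovn-nb-db container

- name: Flush handlers
  ansible.builtin.meta: flush_handlers

# handlers section of the same role, reduced to a plain restart:
- name: Restart ovn-nb-db container
  ansible.builtin.command: docker restart ovn_nb_db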
2025-06-22 19:54:11.494555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494579 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494589 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494656 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494666 | orchestrator | 2025-06-22 19:54:11.494675 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:54:11.494685 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:04.629) 0:02:08.125 *********** 2025-06-22 19:54:11.494695 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494709 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494719 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494739 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:54:11.494802 | orchestrator | 2025-06-22 19:54:11.494812 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.494822 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:03.062) 0:02:11.188 *********** 2025-06-22 19:54:11.494831 | orchestrator | 2025-06-22 19:54:11.494841 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.494850 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.058) 0:02:11.246 *********** 2025-06-22 19:54:11.494860 | orchestrator | 2025-06-22 19:54:11.494870 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:54:11.494879 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.059) 0:02:11.306 *********** 2025-06-22 19:54:11.494889 | orchestrator | 2025-06-22 19:54:11.494898 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:54:11.494908 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.058) 0:02:11.364 *********** 2025-06-22 19:54:11.494917 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.494927 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.494937 | orchestrator | 2025-06-22 19:54:11.494947 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:54:11.494956 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:06.387) 0:02:17.752 *********** 2025-06-22 19:54:11.494966 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.494976 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.494985 | orchestrator | 2025-06-22 19:54:11.494995 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:54:11.495009 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:06.126) 0:02:23.878 *********** 2025-06-22 19:54:11.495018 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:11.495028 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:11.495037 | orchestrator | 2025-06-22 19:54:11.495047 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:54:11.495057 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:06.041) 0:02:29.919 *********** 2025-06-22 19:54:11.495074 | 
orchestrator | skipping: [testbed-node-0]
2025-06-22 19:54:11.495083 | orchestrator |
2025-06-22 19:54:11.495093 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-22 19:54:11.495102 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:00.113) 0:02:30.033 ***********
2025-06-22 19:54:11.495112 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:54:11.495121 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:54:11.495131 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:54:11.495141 | orchestrator |
2025-06-22 19:54:11.495150 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-22 19:54:11.495160 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:01.006) 0:02:31.040 ***********
2025-06-22 19:54:11.495169 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:54:11.495179 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:54:11.495188 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:54:11.495198 | orchestrator |
2025-06-22 19:54:11.495208 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-22 19:54:11.495218 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:00.743) 0:02:31.704 ***********
2025-06-22 19:54:11.495227 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:54:11.495237 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:54:11.495247 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:54:11.495256 | orchestrator |
2025-06-22 19:54:11.495266 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-22 19:54:11.495275 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:00.600) 0:02:32.447 ***********
2025-06-22 19:54:11.495285 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:54:11.495294 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:54:11.495304 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:54:11.495328 | orchestrator |
2025-06-22 19:54:11.495337 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-22 19:54:11.495347 | orchestrator | Sunday 22 June 2025 19:54:08 +0000 (0:00:00.600) 0:02:33.048 ***********
2025-06-22 19:54:11.495357 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:54:11.495366 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:54:11.495376 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:54:11.495386 | orchestrator |
2025-06-22 19:54:11.495395 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-22 19:54:11.495405 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:00.850) 0:02:33.898 ***********
2025-06-22 19:54:11.495414 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:54:11.495424 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:54:11.495433 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:54:11.495443 | orchestrator |
2025-06-22 19:54:11.495452 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:54:11.495462 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-22 19:54:11.495472 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-22 19:54:11.495487 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-22 19:54:11.495498 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:54:11.495507 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:54:11.495517 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:54:11.495527 | orchestrator |
2025-06-22 19:54:11.495537 | orchestrator |
2025-06-22 19:54:11.495547 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:54:11.495563 | orchestrator | Sunday 22 June 2025 19:54:10 +0000 (0:00:00.954) 0:02:34.853 ***********
2025-06-22 19:54:11.495573 | orchestrator | ===============================================================================
2025-06-22 19:54:11.495582 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 40.37s
2025-06-22 19:54:11.495592 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 16.90s
2025-06-22 19:54:11.495602 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.74s
2025-06-22 19:54:11.495611 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.44s
2025-06-22 19:54:11.495621 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.01s
2025-06-22 19:54:11.495630 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.63s
2025-06-22 19:54:11.495640 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.79s
2025-06-22 19:54:11.495649 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.06s
2025-06-22 19:54:11.495659 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.24s
2025-06-22 19:54:11.495668 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.21s
2025-06-22 19:54:11.495678 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.04s
2025-06-22 19:54:11.495687 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.03s
2025-06-22 19:54:11.495702 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.02s
2025-06-22 19:54:11.495712 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.57s
2025-06-22 19:54:11.495721 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s
2025-06-22 19:54:11.495731 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s
2025-06-22 19:54:11.495741 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.46s
2025-06-22 19:54:11.495750 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.40s
2025-06-22 19:54:11.495760 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.33s
2025-06-22 19:54:11.495770 | orchestrator | ovn-db : Set bootstrap args fact for NB (new cluster) ------------------- 1.06s
2025-06-22 19:54:14.530790 | orchestrator | 2025-06-22 19:54:14 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED
2025-06-22 19:54:14.530881 | orchestrator | 2025-06-22 19:54:14 | INFO  | Task
a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:14.530897 | orchestrator | 2025-06-22 19:54:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:17.566687 | orchestrator | 2025-06-22 19:54:17 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:17.569097 | orchestrator | 2025-06-22 19:54:17 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:17.569411 | orchestrator | 2025-06-22 19:54:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:20.610167 | orchestrator | 2025-06-22 19:54:20 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:20.612636 | orchestrator | 2025-06-22 19:54:20 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:20.612682 | orchestrator | 2025-06-22 19:54:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:23.674860 | orchestrator | 2025-06-22 19:54:23 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:23.674954 | orchestrator | 2025-06-22 19:54:23 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:23.674969 | orchestrator | 2025-06-22 19:54:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:26.716925 | orchestrator | 2025-06-22 19:54:26 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:26.717976 | orchestrator | 2025-06-22 19:54:26 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:26.718005 | orchestrator | 2025-06-22 19:54:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:29.768811 | orchestrator | 2025-06-22 19:54:29 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:29.768872 | orchestrator | 2025-06-22 19:54:29 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:29.768883 | orchestrator | 2025-06-22 19:54:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:32.795785 | orchestrator | 2025-06-22 19:54:32 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:32.796446 | orchestrator | 2025-06-22 19:54:32 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:32.796704 | orchestrator | 2025-06-22 19:54:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:35.829817 | orchestrator | 2025-06-22 19:54:35 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:35.829923 | orchestrator | 2025-06-22 19:54:35 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:35.829939 | orchestrator | 2025-06-22 19:54:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:38.872663 | orchestrator | 2025-06-22 19:54:38 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:38.873384 | orchestrator | 2025-06-22 19:54:38 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:38.873477 | orchestrator | 2025-06-22 19:54:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:41.916567 | orchestrator | 2025-06-22 19:54:41 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:41.920301 | orchestrator | 2025-06-22 19:54:41 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:41.920366 | orchestrator 
| 2025-06-22 19:54:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:44.949868 | orchestrator | 2025-06-22 19:54:44 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:44.951814 | orchestrator | 2025-06-22 19:54:44 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:44.951877 | orchestrator | 2025-06-22 19:54:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:47.984892 | orchestrator | 2025-06-22 19:54:47 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:47.985000 | orchestrator | 2025-06-22 19:54:47 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:47.985015 | orchestrator | 2025-06-22 19:54:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:51.037910 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:51.038627 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:51.038725 | orchestrator | 2025-06-22 19:54:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:54.078004 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:54.078185 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:54.078227 | orchestrator | 2025-06-22 19:54:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:57.112577 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:54:57.113830 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:54:57.114191 | orchestrator | 2025-06-22 19:54:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:00.161469 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:00.165091 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:00.165137 | orchestrator | 2025-06-22 19:55:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:03.198984 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:03.201004 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:03.201061 | orchestrator | 2025-06-22 19:55:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:06.243775 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:06.245303 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:06.245383 | orchestrator | 2025-06-22 19:55:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:09.290230 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:09.290360 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:09.290379 | orchestrator | 2025-06-22 19:55:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:12.337781 | 
orchestrator | 2025-06-22 19:55:12 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:12.338244 | orchestrator | 2025-06-22 19:55:12 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:12.338292 | orchestrator | 2025-06-22 19:55:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:15.382081 | orchestrator | 2025-06-22 19:55:15 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:15.384129 | orchestrator | 2025-06-22 19:55:15 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:15.384628 | orchestrator | 2025-06-22 19:55:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:18.424017 | orchestrator | 2025-06-22 19:55:18 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:18.424762 | orchestrator | 2025-06-22 19:55:18 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:18.425010 | orchestrator | 2025-06-22 19:55:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:21.471694 | orchestrator | 2025-06-22 19:55:21 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:21.471800 | orchestrator | 2025-06-22 19:55:21 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:21.471817 | orchestrator | 2025-06-22 19:55:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:24.526461 | orchestrator | 2025-06-22 19:55:24 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:24.528186 | orchestrator | 2025-06-22 19:55:24 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:24.528219 | orchestrator | 2025-06-22 19:55:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:27.599583 | orchestrator | 2025-06-22 19:55:27 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:27.600065 | orchestrator | 2025-06-22 19:55:27 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:27.600359 | orchestrator | 2025-06-22 19:55:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:30.655967 | orchestrator | 2025-06-22 19:55:30 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:30.660720 | orchestrator | 2025-06-22 19:55:30 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:30.660788 | orchestrator | 2025-06-22 19:55:30 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state STARTED 2025-06-22 19:55:30.660855 | orchestrator | 2025-06-22 19:55:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:33.713026 | orchestrator | 2025-06-22 19:55:33 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:33.713731 | orchestrator | 2025-06-22 19:55:33 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:33.715445 | orchestrator | 2025-06-22 19:55:33 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state STARTED 2025-06-22 19:55:33.715468 | orchestrator | 2025-06-22 19:55:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:36.757035 | orchestrator | 2025-06-22 19:55:36 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:36.760942 | orchestrator | 2025-06-22 19:55:36 | INFO  | Task 
a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:36.761759 | orchestrator | 2025-06-22 19:55:36 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state STARTED 2025-06-22 19:55:36.761785 | orchestrator | 2025-06-22 19:55:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:39.805040 | orchestrator | 2025-06-22 19:55:39 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:39.805147 | orchestrator | 2025-06-22 19:55:39 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:39.805480 | orchestrator | 2025-06-22 19:55:39 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state STARTED 2025-06-22 19:55:39.805503 | orchestrator | 2025-06-22 19:55:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:42.844836 | orchestrator | 2025-06-22 19:55:42 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:42.844956 | orchestrator | 2025-06-22 19:55:42 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:42.848049 | orchestrator | 2025-06-22 19:55:42 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state STARTED 2025-06-22 19:55:42.848091 | orchestrator | 2025-06-22 19:55:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:45.899457 | orchestrator | 2025-06-22 19:55:45 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:45.902930 | orchestrator | 2025-06-22 19:55:45 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:45.904213 | orchestrator | 2025-06-22 19:55:45 | INFO  | Task 77779a86-55b7-41e2-963b-16b640bf514d is in state SUCCESS 2025-06-22 19:55:45.905018 | orchestrator | 2025-06-22 19:55:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:48.956473 | orchestrator | 2025-06-22 19:55:48 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:48.956579 | orchestrator | 2025-06-22 19:55:48 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:48.956602 | orchestrator | 2025-06-22 19:55:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:52.009668 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:52.011465 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:52.011504 | orchestrator | 2025-06-22 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:55.054988 | orchestrator | 2025-06-22 19:55:55 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:55.057253 | orchestrator | 2025-06-22 19:55:55 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:55.057302 | orchestrator | 2025-06-22 19:55:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:58.111132 | orchestrator | 2025-06-22 19:55:58 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:55:58.112626 | orchestrator | 2025-06-22 19:55:58 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:55:58.112660 | orchestrator | 2025-06-22 19:55:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:01.166651 | orchestrator | 2025-06-22 19:56:01 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 
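The long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines comes from the OSISM client polling its task backend once per second until each task reports SUCCESS or FAILURE. Expressed as an Ansible retry loop purely for illustration (the real client is Python, and osism_task_state.sh is a hypothetical helper):

- name: Wait for task b549db4e-826f-4d91-8ced-61396a48bf3b to finish
  ansible.builtin.command: /usr/local/bin/osism_task_state.sh b549db4e-826f-4d91-8ced-61396a48bf3b
  register: task_state
  until: task_state.stdout in ["SUCCESS", "FAILURE"]
  retries: 3600
  delay: 1  # matches "Wait 1 second(s) until the next check"
  changed_when: false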
19:56:01.167785 | orchestrator | 2025-06-22 19:56:01 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:01.167953 | orchestrator | 2025-06-22 19:56:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:04.220994 | orchestrator | 2025-06-22 19:56:04 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:04.221905 | orchestrator | 2025-06-22 19:56:04 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:04.221939 | orchestrator | 2025-06-22 19:56:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:07.278758 | orchestrator | 2025-06-22 19:56:07 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:07.280005 | orchestrator | 2025-06-22 19:56:07 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:07.280040 | orchestrator | 2025-06-22 19:56:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:10.336104 | orchestrator | 2025-06-22 19:56:10 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:10.337162 | orchestrator | 2025-06-22 19:56:10 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:10.337203 | orchestrator | 2025-06-22 19:56:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:13.389512 | orchestrator | 2025-06-22 19:56:13 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:13.392667 | orchestrator | 2025-06-22 19:56:13 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:13.392739 | orchestrator | 2025-06-22 19:56:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:16.451034 | orchestrator | 2025-06-22 19:56:16 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:16.453499 | orchestrator | 2025-06-22 19:56:16 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:16.453579 | orchestrator | 2025-06-22 19:56:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:19.499931 | orchestrator | 2025-06-22 19:56:19 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:19.501535 | orchestrator | 2025-06-22 19:56:19 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:19.501583 | orchestrator | 2025-06-22 19:56:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:22.559536 | orchestrator | 2025-06-22 19:56:22 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:22.561117 | orchestrator | 2025-06-22 19:56:22 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:22.561162 | orchestrator | 2025-06-22 19:56:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:25.613771 | orchestrator | 2025-06-22 19:56:25 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:25.614802 | orchestrator | 2025-06-22 19:56:25 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:25.615162 | orchestrator | 2025-06-22 19:56:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:28.661995 | orchestrator | 2025-06-22 19:56:28 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:28.662155 | orchestrator | 2025-06-22 19:56:28 | INFO  | Task 
a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state STARTED 2025-06-22 19:56:28.662192 | orchestrator | 2025-06-22 19:56:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:31.715175 | orchestrator | 2025-06-22 19:56:31 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:31.726403 | orchestrator | 2025-06-22 19:56:31 | INFO  | Task a7f52d0d-6f0c-427e-a2e5-fedf70731d74 is in state SUCCESS 2025-06-22 19:56:31.733181 | orchestrator | 2025-06-22 19:56:31.733463 | orchestrator | None 2025-06-22 19:56:31.733494 | orchestrator | 2025-06-22 19:56:31.733515 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:56:31.733536 | orchestrator | 2025-06-22 19:56:31.733548 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:56:31.733560 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.273) 0:00:00.273 *********** 2025-06-22 19:56:31.733571 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.733583 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.733594 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.733605 | orchestrator | 2025-06-22 19:56:31.733616 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:56:31.733627 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.312) 0:00:00.585 *********** 2025-06-22 19:56:31.733639 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-22 19:56:31.733650 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-22 19:56:31.733661 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-22 19:56:31.733672 | orchestrator | 2025-06-22 19:56:31.733683 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-22 19:56:31.733693 | orchestrator | 2025-06-22 19:56:31.733705 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:56:31.733718 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.529) 0:00:01.115 *********** 2025-06-22 19:56:31.733731 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.733743 | orchestrator | 2025-06-22 19:56:31.733756 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-22 19:56:31.733768 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:01.310) 0:00:02.425 *********** 2025-06-22 19:56:31.733812 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.733825 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.733838 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.733958 | orchestrator | 2025-06-22 19:56:31.733973 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 19:56:31.733985 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:00.800) 0:00:03.225 *********** 2025-06-22 19:56:31.733999 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.734011 | orchestrator | 2025-06-22 19:56:31.734209 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-22 19:56:31.734230 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:01.604) 0:00:04.830 *********** 2025-06-22 
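The "Group hosts based on ..." tasks at the start of the loadbalancer play use Ansible's group_by module; the item name enable_loadbalancer_True shown in the log is simply the generated group key. A minimal sketch:

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_loadbalancer_{{ enable_loadbalancer | bool }}"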
19:56:31.734248 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.734266 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.734284 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.734303 | orchestrator | 2025-06-22 19:56:31.734366 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-22 19:56:31.734385 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.678) 0:00:05.508 *********** 2025-06-22 19:56:31.734402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734490 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:56:31.734588 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:56:31.734603 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:56:31.734614 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:56:31.734625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:56:31.734636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:56:31.734647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:56:31.734685 | orchestrator | 2025-06-22 19:56:31.734698 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:56:31.734709 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:02.708) 0:00:08.217 *********** 2025-06-22 19:56:31.734720 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:56:31.734732 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:56:31.734792 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:56:31.734823 | orchestrator | 2025-06-22 19:56:31.734835 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:56:31.734845 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.820) 0:00:09.038 *********** 2025-06-22 19:56:31.734856 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:56:31.734867 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:56:31.734887 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:56:31.734901 | orchestrator | 2025-06-22 19:56:31.734913 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:56:31.734925 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:01.899) 0:00:10.938 *********** 2025-06-22 19:56:31.734938 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-22 19:56:31.734965 | orchestrator | skipping: 
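The sysctl and module-load tasks above prepare the hosts for keepalived and HAProxy: the non-local bind settings let a standby node carry the VIP configuration, and ip_vs is loaded and persisted across reboots. A reduced sketch with the same keys and values as in the log (the modules-load.d file name is illustrative):

- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
  loop:
    - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
    - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
    - { name: net.unix.max_dgram_qlen, value: 128 }

- name: Load modules
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "ip_vs\n"
    dest: /etc/modules-load.d/ip_vs.conf
    mode: "0644"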
[testbed-node-0] 2025-06-22 19:56:31.734998 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-22 19:56:31.735011 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.735023 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-22 19:56:31.735036 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.735048 | orchestrator | 2025-06-22 19:56:31.735060 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-22 19:56:31.735073 | orchestrator | Sunday 22 June 2025 19:50:40 +0000 (0:00:01.598) 0:00:12.537 *********** 2025-06-22 19:56:31.735089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.735202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.735215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.735228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.735241 | orchestrator | 2025-06-22 19:56:31.735306 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-22 19:56:31.735604 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:01.890) 0:00:14.428 *********** 2025-06-22 19:56:31.735620 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.735631 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.735642 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.735653 | orchestrator | 2025-06-22 19:56:31.735664 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-22 19:56:31.735675 | orchestrator | 
Sunday 22 June 2025 19:50:43 +0000 (0:00:01.029) 0:00:15.457 *********** 2025-06-22 19:56:31.735686 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-22 19:56:31.735697 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-22 19:56:31.735708 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-22 19:56:31.735718 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-22 19:56:31.735729 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-22 19:56:31.735740 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-22 19:56:31.735751 | orchestrator | 2025-06-22 19:56:31.735762 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-22 19:56:31.735773 | orchestrator | Sunday 22 June 2025 19:50:45 +0000 (0:00:01.971) 0:00:17.429 *********** 2025-06-22 19:56:31.735784 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.735795 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.735806 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.735817 | orchestrator | 2025-06-22 19:56:31.735828 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-22 19:56:31.735850 | orchestrator | Sunday 22 June 2025 19:50:46 +0000 (0:00:01.325) 0:00:18.755 *********** 2025-06-22 19:56:31.735873 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.735884 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.735894 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.735904 | orchestrator | 2025-06-22 19:56:31.735913 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-22 19:56:31.735923 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:01.289) 0:00:20.044 *********** 2025-06-22 19:56:31.735941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.735965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.736008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.736021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736042 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.736053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.736070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', 
'__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736102 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.736112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.736122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.736132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736158 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.736168 | orchestrator | 2025-06-22 19:56:31.736178 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-22 19:56:31.736188 | orchestrator | Sunday 22 June 2025 19:50:49 +0000 (0:00:00.839) 0:00:20.884 *********** 2025-06-22 19:56:31.736197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.736415 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8', '__omit_place_holder__d14bace6930b1148a0e657d5051d6d546795b7f8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:56:31.736431 | orchestrator | 2025-06-22 19:56:31.736441 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-22 19:56:31.736451 | orchestrator | Sunday 22 June 2025 19:50:52 +0000 (0:00:03.348) 0:00:24.232 *********** 2025-06-22 19:56:31.736461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.736546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.736556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.736566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.736576 | orchestrator | 2025-06-22 19:56:31.736586 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-22 19:56:31.736596 | orchestrator | Sunday 22 June 2025 19:50:55 +0000 (0:00:03.060) 0:00:27.293 *********** 2025-06-22 
19:56:31.736606 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:56:31.736622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:56:31.736632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:56:31.736642 | orchestrator | 2025-06-22 19:56:31.736652 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-22 19:56:31.736661 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:01.764) 0:00:29.057 *********** 2025-06-22 19:56:31.736671 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:56:31.736704 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:56:31.736715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:56:31.736803 | orchestrator | 2025-06-22 19:56:31.736813 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-22 19:56:31.736823 | orchestrator | Sunday 22 June 2025 19:51:00 +0000 (0:00:03.204) 0:00:32.262 *********** 2025-06-22 19:56:31.736832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.736842 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.736859 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.736869 | orchestrator | 2025-06-22 19:56:31.736879 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-22 19:56:31.736889 | orchestrator | Sunday 22 June 2025 19:51:00 +0000 (0:00:00.484) 0:00:32.746 *********** 2025-06-22 19:56:31.736898 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:56:31.736909 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:56:31.736919 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:56:31.736929 | orchestrator | 2025-06-22 19:56:31.736939 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-22 19:56:31.736948 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:04.076) 0:00:36.823 *********** 2025-06-22 19:56:31.736958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:56:31.736968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:56:31.736977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:56:31.736987 | orchestrator | 2025-06-22 19:56:31.736997 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-22 19:56:31.737007 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:01.655) 0:00:38.479 *********** 2025-06-22 19:56:31.737016 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-22 
19:56:31.737026 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-22 19:56:31.737036 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-22 19:56:31.737045 | orchestrator | 2025-06-22 19:56:31.737055 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-22 19:56:31.737065 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:01.342) 0:00:39.821 *********** 2025-06-22 19:56:31.737074 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-22 19:56:31.737084 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-22 19:56:31.737094 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-22 19:56:31.737103 | orchestrator | 2025-06-22 19:56:31.737113 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:56:31.737122 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:01.509) 0:00:41.331 *********** 2025-06-22 19:56:31.737132 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.737142 | orchestrator | 2025-06-22 19:56:31.737151 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-22 19:56:31.737161 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:00.917) 0:00:42.249 *********** 2025-06-22 19:56:31.737176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.737196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.737213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 
19:56:31.737223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.737234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.737299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.737360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.737377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.737402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.737413 | orchestrator | 2025-06-22 19:56:31.737423 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-22 19:56:31.737433 | orchestrator | Sunday 22 June 2025 19:51:14 +0000 (0:00:04.054) 0:00:46.303 *********** 2025-06-22 19:56:31.737443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737473 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.737483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737532 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.737542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737572 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.737582 | orchestrator | 2025-06-22 19:56:31.737592 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-22 19:56:31.737602 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:00.638) 0:00:46.941 *********** 2025-06-22 19:56:31.737612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737659 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.737670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737700 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.737710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737751 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.737761 | orchestrator | 2025-06-22 19:56:31.737771 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 19:56:31.737780 | orchestrator | Sunday 22 June 2025 19:51:16 +0000 (0:00:01.414) 0:00:48.356 *********** 2025-06-22 19:56:31.737809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.737840 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.737851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.737861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.737877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738003 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.738073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738108 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.738118 | orchestrator | 2025-06-22 19:56:31.738128 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 19:56:31.738138 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:00.620) 0:00:48.977 *********** 2025-06-22 19:56:31.738148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738190 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.738204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738242 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.738252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738279 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738289 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.738299 | orchestrator | 2025-06-22 19:56:31.738325 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 19:56:31.738347 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:00.697) 0:00:49.674 *********** 2025-06-22 19:56:31.738372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738411 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.738421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
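(For reference: every loop item echoed in this play has the same shape, a service key mapped to a container definition with container_name, group, enabled, image, volumes and, where defined, dimensions and healthcheck. Below is a minimal Python sketch of that structure, reduced to two of the services and with the volume lists omitted for brevity; names and values are copied from the items above, the dict name is only for illustration, and it is a sketch of the data the tasks iterate over, not the kolla-ansible role code itself. The enabled flag is what separates the consistently skipped haproxy-ssh items from the haproxy, proxysql and keepalived items that report "changed" in the config.json and CA-certificate tasks; individual tasks add conditions of their own on top of it, which is why, for example, keepalived is skipped in the check-copy task even though it is enabled.)

# Illustrative sketch only: mirrors the loop items printed in the console output
# above; it is not the kolla-ansible loadbalancer role's actual code.
loadbalancer_services = {
    "haproxy": {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.6.12.20250530",
        "privileged": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
            "timeout": "30",
        },
    },
    "haproxy-ssh": {
        "container_name": "haproxy_ssh",
        "group": "loadbalancer",
        "enabled": False,  # disabled in this run, hence the "skipping" results above
        "image": "registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530",
    },
}

# Tasks such as "Copying checks for services which are enabled" loop over this
# mapping and act only where the service is enabled (plus task-specific conditions).
enabled_services = {k: v for k, v in loadbalancer_services.items() if v["enabled"]}
print(sorted(enabled_services))  # ['haproxy']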
2025-06-22 19:56:31.738432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738458 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.738468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738508 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.738518 | orchestrator | 2025-06-22 19:56:31.738528 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-22 19:56:31.738538 | orchestrator | Sunday 22 June 2025 
19:51:18 +0000 (0:00:01.139) 0:00:50.814 *********** 2025-06-22 19:56:31.738548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738584 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.738594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738635 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.738646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738766 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.738776 | orchestrator | 2025-06-22 19:56:31.738785 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-22 19:56:31.738796 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:00.591) 0:00:51.405 *********** 2025-06-22 19:56:31.738806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738816 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738848 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.738858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738894 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.738904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.738928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.738938 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.738948 | orchestrator | 2025-06-22 19:56:31.738958 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-22 19:56:31.738972 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.589) 0:00:51.995 *********** 2025-06-22 19:56:31.738983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.738999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.739009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.739019 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.739029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.739039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.739049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.739059 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.739079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:56:31.739090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:56:31.739106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:56:31.739116 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.739125 | orchestrator | 2025-06-22 19:56:31.739135 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-22 19:56:31.739145 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:01.122) 0:00:53.117 *********** 2025-06-22 19:56:31.739155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:56:31.739165 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:56:31.739175 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:56:31.739184 | orchestrator | 2025-06-22 19:56:31.739194 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-22 19:56:31.739203 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:01.302) 0:00:54.419 *********** 2025-06-22 19:56:31.739213 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:56:31.739223 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:56:31.739232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:56:31.739242 | orchestrator | 2025-06-22 19:56:31.739252 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-22 19:56:31.739261 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:01.479) 0:00:55.899 *********** 2025-06-22 19:56:31.739271 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:56:31.739281 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:56:31.739290 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:56:31.739300 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.739325 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:56:31.739335 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:56:31.739345 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.739355 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:56:31.739365 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.739374 | orchestrator | 2025-06-22 19:56:31.739384 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-22 19:56:31.739394 | orchestrator | Sunday 22 June 2025 19:51:26 +0000 (0:00:01.987) 0:00:57.887 *********** 2025-06-22 19:56:31.739424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:56:31.739604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.739627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.739638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:56:31.739648 | orchestrator | 2025-06-22 19:56:31.739658 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-22 19:56:31.739668 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:02.210) 0:01:00.097 *********** 2025-06-22 19:56:31.739677 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.739703 | orchestrator | 2025-06-22 19:56:31.739713 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-22 19:56:31.739722 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:00.653) 0:01:00.751 *********** 2025-06-22 19:56:31.739734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:56:31.739746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.739756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:56:31.739794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739804 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.739815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:56:31.739825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.739855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739891 | orchestrator | 2025-06-22 19:56:31.739901 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-22 19:56:31.739911 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:02.979) 0:01:03.731 *********** 2025-06-22 19:56:31.739921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:56:31.739932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.739948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739958 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.739968 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.739984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:56:31.739994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.740005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:56:31.740051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740106 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.740122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.740138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740159 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.740169 | orchestrator | 2025-06-22 19:56:31.740179 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-22 19:56:31.740189 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:00.716) 0:01:04.448 *********** 2025-06-22 19:56:31.740199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:56:31.740209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  
2025-06-22 19:56:31.740220 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.740271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:56:31.740281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:56:31.740291 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.740336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:56:31.740347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:56:31.740357 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.740367 | orchestrator | 2025-06-22 19:56:31.740377 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-22 19:56:31.740387 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:01.077) 0:01:05.526 *********** 2025-06-22 19:56:31.740396 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.740406 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.740416 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.740425 | orchestrator | 2025-06-22 19:56:31.740435 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-22 19:56:31.740444 | orchestrator | Sunday 22 June 2025 19:51:34 +0000 (0:00:01.107) 0:01:06.633 *********** 2025-06-22 19:56:31.740454 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.740464 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.740473 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.740483 | orchestrator | 2025-06-22 19:56:31.740492 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-22 19:56:31.740502 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:01.775) 0:01:08.409 *********** 2025-06-22 19:56:31.740511 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.740521 | orchestrator | 2025-06-22 19:56:31.740531 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-22 19:56:31.740540 | orchestrator | Sunday 22 June 2025 19:51:37 +0000 (0:00:00.671) 0:01:09.081 *********** 2025-06-22 19:56:31.740574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.740587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.740615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.740673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740701 | orchestrator | 2025-06-22 19:56:31.740711 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-22 19:56:31.740721 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:05.136) 0:01:14.217 *********** 2025-06-22 19:56:31.740731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.740742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740773 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.740783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.740799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740809 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740819 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.740829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.740848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.740869 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.740879 | orchestrator | 2025-06-22 19:56:31.740894 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-22 19:56:31.740904 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.905) 0:01:15.123 *********** 2025-06-22 19:56:31.740914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 
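The barbican entries above hand the haproxy-config role a per-service 'haproxy' mapping: an internal barbican_api listener and an external barbican_api_external listener behind api.testbed.osism.xyz, both on port 9311 with tls_backend 'no', backed by the three controller addresses that also appear in the healthcheck_curl commands. The Python sketch below condenses such a mapping into a simplified HAProxy-style stanza purely as a reading aid; the role itself renders kolla-ansible's own Jinja templates, so the real output differs, and the VIP addresses are left as placeholders because they are not shown in this log.

# Illustrative only: condense a kolla-style 'haproxy' service mapping (values
# copied from the log above) into a simplified HAProxy listen block.
barbican_haproxy = {
    "barbican_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "9311", "listen_port": "9311", "tls_backend": "no",
    },
    "barbican_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9311", "listen_port": "9311", "tls_backend": "no",
    },
}

# Backend addresses as they appear in the healthcheck_curl entries above.
backends = {"testbed-node-0": "192.168.16.10",
            "testbed-node-1": "192.168.16.11",
            "testbed-node-2": "192.168.16.12"}


def render(name: str, svc: dict, members: dict) -> str:
    """Render one simplified listen block for an enabled service entry."""
    scope = "external" if svc["external"] else "internal"
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind <{scope}_vip>:{svc['listen_port']}"]
    lines += [f"    server {host} {ip}:{svc['port']} check"
              for host, ip in members.items()]
    return "\n".join(lines)


for name, svc in barbican_haproxy.items():
    if svc.get("enabled") == "yes":
        print(render(name, svc, backends), end="\n\n")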
19:56:31.740925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:56:31.740935 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.740944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:56:31.740954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:56:31.740964 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:56:31.741100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:56:31.741110 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741120 | orchestrator | 2025-06-22 19:56:31.741130 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-22 19:56:31.741139 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:00.973) 0:01:16.097 *********** 2025-06-22 19:56:31.741149 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.741158 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.741168 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.741177 | orchestrator | 2025-06-22 19:56:31.741187 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-22 19:56:31.741197 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:01.984) 0:01:18.082 *********** 2025-06-22 19:56:31.741206 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.741216 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.741225 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.741235 | orchestrator | 2025-06-22 19:56:31.741245 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-22 19:56:31.741254 | orchestrator | Sunday 22 June 2025 19:51:48 +0000 (0:00:01.968) 0:01:20.050 *********** 2025-06-22 19:56:31.741264 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.741273 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741283 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741292 | orchestrator | 2025-06-22 19:56:31.741302 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-22 19:56:31.741329 | orchestrator | Sunday 22 June 2025 19:51:48 +0000 (0:00:00.286) 0:01:20.337 *********** 2025-06-22 19:56:31.741339 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.741349 | orchestrator | 2025-06-22 19:56:31.741359 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] 
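Two healthcheck styles recur in the service definitions logged here: API containers are probed with healthcheck_curl against their bound address (for example http://192.168.16.10:9311), while worker-style containers use healthcheck_port <service> 5672, a check tied to the RabbitMQ port. The snippet below is only a rough Python approximation of those two probe styles, not the actual kolla healthcheck helpers; the hosts and ports are taken from the log entries.

# Rough, illustrative stand-ins for the two healthcheck styles seen above
# (healthcheck_curl / healthcheck_port). Not the kolla scripts.
import socket
import urllib.error
import urllib.request


def curl_check(url: str, timeout: float = 30.0) -> bool:
    """Approximate 'healthcheck_curl <url>': succeed if the endpoint answers."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False


def port_check(host: str, port: int, timeout: float = 30.0) -> bool:
    """Loose stand-in for 'healthcheck_port <service> <port>': this only tests
    that the port is reachable, while the real helper checks the named
    service's own connection to it (here RabbitMQ on 5672)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Values taken from the barbican items logged above.
    print("barbican-api:", curl_check("http://192.168.16.10:9311"))
    print("rabbitmq port:", port_check("192.168.16.10", 5672))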
******************* 2025-06-22 19:56:31.741368 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.608) 0:01:20.945 *********** 2025-06-22 19:56:31.741392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:56:31.741410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:56:31.741421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:56:31.741431 | orchestrator | 2025-06-22 19:56:31.741441 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-22 19:56:31.741450 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:04.369) 0:01:25.314 *********** 2025-06-22 19:56:31.741460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:56:31.741470 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.741484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:56:31.741501 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:56:31.741528 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741537 | orchestrator | 2025-06-22 19:56:31.741547 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-22 19:56:31.741557 | orchestrator | Sunday 22 June 2025 19:51:55 +0000 (0:00:01.895) 0:01:27.210 *********** 2025-06-22 19:56:31.741567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741600 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741620 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:56:31.741657 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.741666 | orchestrator | 2025-06-22 19:56:31.741676 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-22 19:56:31.741685 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:02.651) 0:01:29.862 *********** 2025-06-22 19:56:31.741695 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.741704 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741782 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741792 | orchestrator | 2025-06-22 19:56:31.741806 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-22 19:56:31.741817 | orchestrator | Sunday 22 June 2025 19:51:58 +0000 (0:00:00.964) 0:01:30.827 *********** 2025-06-22 19:56:31.741826 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.741836 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.741846 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.741855 | orchestrator | 2025-06-22 19:56:31.741865 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-22 19:56:31.741880 | orchestrator | Sunday 22 June 2025 19:52:00 +0000 (0:00:01.341) 0:01:32.169 *********** 2025-06-22 19:56:31.741891 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.741900 | orchestrator | 2025-06-22 19:56:31.741910 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-22 
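The ceph-rgw entries above differ from the API services: instead of deriving backend members from an inventory group, they ship a ready-made custom_member_list, so the radosgw and radosgw_external frontends on port 6780 are backed by testbed-node-3/4/5 listening on 8081. A small sketch of that indirection, using only values taken from the log:

# Sketch of the ceph-rgw wiring shown above: the load-balancer port (6780) is
# decoupled from the backend RGW port (8081) because the member lines are
# supplied verbatim via custom_member_list rather than derived from a group.
radosgw = {
    "enabled": True,
    "mode": "http",
    "port": "6780",  # port exposed on the load balancer
    "custom_member_list": [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ],
}

print(f"listen radosgw (frontend port {radosgw['port']})")
for member in radosgw["custom_member_list"]:
    # Each entry is already a complete HAProxy 'server' line.
    print("    " + member)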
19:56:31.741920 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:00.830) 0:01:32.999 *********** 2025-06-22 19:56:31.741930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.741941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.741952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.741968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.741988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.742143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742181 | orchestrator | 2025-06-22 19:56:31.742192 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-22 19:56:31.742202 | orchestrator | Sunday 22 June 2025 19:52:05 +0000 (0:00:04.469) 0:01:37.468 *********** 2025-06-22 19:56:31.742212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.742228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.742291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742302 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742381 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.742392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742402 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.742423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.742435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.742471 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.742481 | orchestrator | 2025-06-22 19:56:31.742492 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-22 19:56:31.742501 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:01.352) 0:01:38.821 *********** 2025-06-22 19:56:31.742512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742532 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.742663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742691 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.742706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:56:31.742727 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.742737 | orchestrator | 2025-06-22 19:56:31.742747 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-22 19:56:31.742756 | orchestrator | Sunday 22 June 2025 19:52:08 +0000 (0:00:01.184) 0:01:40.005 *********** 2025-06-22 19:56:31.742766 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.742799 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.742811 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.742821 | orchestrator | 2025-06-22 19:56:31.742831 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-22 19:56:31.742840 | orchestrator | Sunday 22 June 2025 19:52:09 +0000 (0:00:01.463) 0:01:41.469 *********** 2025-06-22 19:56:31.742850 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.742860 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.742870 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.742879 | orchestrator | 2025-06-22 19:56:31.742889 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-22 19:56:31.742905 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:01.988) 0:01:43.457 *********** 2025-06-22 19:56:31.742915 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.742925 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.742934 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.742944 | orchestrator | 2025-06-22 19:56:31.742954 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-22 19:56:31.742963 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:00.444) 0:01:43.901 *********** 2025-06-22 19:56:31.742973 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.742983 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.742992 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.743002 | orchestrator | 2025-06-22 19:56:31.743012 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-22 19:56:31.743021 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:00.312) 0:01:44.214 *********** 2025-06-22 19:56:31.743031 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.743041 | orchestrator | 2025-06-22 19:56:31.743050 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-22 19:56:31.743060 | orchestrator | Sunday 22 June 2025 19:52:13 +0000 (0:00:00.761) 0:01:44.975 *********** 2025-06-22 19:56:31.743071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:56:31.743082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:56:31.743150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:56:31.743279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743364 | orchestrator | 2025-06-22 19:56:31.743374 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-22 19:56:31.743384 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:03.924) 0:01:48.900 *********** 2025-06-22 19:56:31.743405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:56:31.743427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:56:31.743467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743623 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.743640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743650 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.743661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:56:31.743671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:56:31.743681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.743846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.743856 | orchestrator | 2025-06-22 19:56:31.743866 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-22 19:56:31.743876 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:00.890) 0:01:49.790 *********** 2025-06-22 19:56:31.743886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743906 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.743916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743935 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.743945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:56:31.743964 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.743974 | orchestrator | 2025-06-22 19:56:31.743984 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-22 19:56:31.743994 | orchestrator | Sunday 22 June 2025 19:52:19 +0000 (0:00:01.228) 0:01:51.019 *********** 2025-06-22 19:56:31.744003 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.744020 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.744030 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.744040 | orchestrator | 2025-06-22 19:56:31.744050 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-22 19:56:31.744059 | 
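
The loop items recorded above are kolla-ansible's per-service dicts for Designate; the nested 'haproxy' mapping of each item describes one internal and one external listener (mode, port, listen_port, optional external_fqdn), which the haproxy-config role templates into the load-balancer configuration on every controller. Purely as an illustrative sketch — a hypothetical helper and an assumed internal VIP of 192.168.16.9, not the role's actual template — such an entry could be rendered roughly like this:

    # Illustrative only: turns a kolla-style 'haproxy' service entry into a
    # minimal HAProxy listen block. Helper name, VIP and exact layout are
    # assumptions, not the output of the real haproxy-config role.
    def render_listener(name, cfg, vip, backends):
        port = cfg['port']
        listen_port = cfg.get('listen_port', port)
        lines = [
            f"listen {name}",
            f"    mode {cfg.get('mode', 'http')}",
            f"    bind {vip}:{listen_port}",
        ]
        for host, addr in backends:
            # Same member shape as the 'server ... check inter 2000 rise 2 fall 5'
            # strings visible in the custom_member_list entries above.
            lines.append(f"    server {host} {addr}:{port} check inter 2000 rise 2 fall 5")
        return "\n".join(lines)

    designate_api = {'enabled': 'yes', 'mode': 'http', 'external': False,
                     'port': '9001', 'listen_port': '9001'}
    print(render_listener('designate_api', designate_api, '192.168.16.9',
                          [('testbed-node-0', '192.168.16.10'),
                           ('testbed-node-1', '192.168.16.11'),
                           ('testbed-node-2', '192.168.16.12')]))

Running the sketch prints a designate_api block with one server line per controller, which is roughly the shape of configuration the "Copying over designate haproxy config" task above generates before HAProxy is reloaded.
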
orchestrator | Sunday 22 June 2025 19:52:21 +0000 (0:00:02.251) 0:01:53.270 *********** 2025-06-22 19:56:31.744069 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.744079 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.744088 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.744098 | orchestrator | 2025-06-22 19:56:31.744108 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-22 19:56:31.744117 | orchestrator | Sunday 22 June 2025 19:52:23 +0000 (0:00:02.049) 0:01:55.320 *********** 2025-06-22 19:56:31.744127 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.744137 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.744146 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.744156 | orchestrator | 2025-06-22 19:56:31.744166 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-22 19:56:31.744176 | orchestrator | Sunday 22 June 2025 19:52:23 +0000 (0:00:00.322) 0:01:55.642 *********** 2025-06-22 19:56:31.744186 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.744195 | orchestrator | 2025-06-22 19:56:31.744205 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-22 19:56:31.744215 | orchestrator | Sunday 22 June 2025 19:52:24 +0000 (0:00:00.848) 0:01:56.491 *********** 2025-06-22 19:56:31.744234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:56:31.744369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.744409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:56:31.744422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:56:31.744449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 
19:56:31.744462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.744478 | orchestrator | 2025-06-22 19:56:31.744488 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-22 19:56:31.744498 | orchestrator | Sunday 22 June 2025 19:52:29 +0000 (0:00:04.780) 0:02:01.271 *********** 2025-06-22 19:56:31.744518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:56:31.744530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:56:31.744552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.744563 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.744584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.744596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.744613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:56:31.744636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.744648 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.744657 | orchestrator | 2025-06-22 19:56:31.744667 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-22 19:56:31.744677 | orchestrator | Sunday 22 June 2025 19:52:32 +0000 (0:00:03.101) 0:02:04.373 *********** 2025-06-22 19:56:31.744688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744715 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.744725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744746 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.744760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:56:31.744786 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.744796 | orchestrator | 2025-06-22 19:56:31.744806 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-22 19:56:31.744815 | orchestrator | Sunday 22 June 2025 19:52:35 +0000 (0:00:03.300) 0:02:07.674 *********** 2025-06-22 19:56:31.744825 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.744835 | orchestrator | 
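
Every container item in these loops also carries a 'healthcheck' mapping ('interval', 'retries', 'start_period', 'test', 'timeout'; the numeric values are seconds given as strings) that kolla-ansible hands to the container engine. Purely to illustrate what those fields mean — this hypothetical helper is not how kolla applies them — they correspond to the standard docker-run health flags:

    # Illustrative only: maps a kolla-style healthcheck dict onto docker-run
    # style health flags. Values in the log are seconds expressed as strings.
    def healthcheck_flags(hc):
        # 'CMD-SHELL' entries carry the shell command in the remaining list items.
        if hc['test'][0] == 'CMD-SHELL':
            cmd = " ".join(hc['test'][1:])
        else:
            cmd = " ".join(hc['test'])
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    # The glance-api healthcheck as dumped in the log above.
    glance_hc = {'interval': '30', 'retries': '3', 'start_period': '5',
                 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'],
                 'timeout': '30'}
    print(" ".join(healthcheck_flags(glance_hc)))

With the glance-api entry from the log, the sketch prints --health-cmd=healthcheck_curl http://192.168.16.10:9292 --health-interval=30s --health-retries=3 --health-start-period=5s --health-timeout=30s, i.e. a check every 30 seconds that fails the container after three consecutive timeouts.
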
changed: [testbed-node-1] 2025-06-22 19:56:31.744845 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.744854 | orchestrator | 2025-06-22 19:56:31.744864 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-22 19:56:31.744874 | orchestrator | Sunday 22 June 2025 19:52:37 +0000 (0:00:01.626) 0:02:09.301 *********** 2025-06-22 19:56:31.744889 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.744899 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.744906 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.744914 | orchestrator | 2025-06-22 19:56:31.744922 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-22 19:56:31.744930 | orchestrator | Sunday 22 June 2025 19:52:39 +0000 (0:00:02.008) 0:02:11.310 *********** 2025-06-22 19:56:31.744938 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.744946 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.744954 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.744962 | orchestrator | 2025-06-22 19:56:31.744969 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-22 19:56:31.744977 | orchestrator | Sunday 22 June 2025 19:52:39 +0000 (0:00:00.317) 0:02:11.627 *********** 2025-06-22 19:56:31.744985 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.744993 | orchestrator | 2025-06-22 19:56:31.745001 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-22 19:56:31.745009 | orchestrator | Sunday 22 June 2025 19:52:40 +0000 (0:00:00.861) 0:02:12.488 *********** 2025-06-22 19:56:31.745017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:56:31.745026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:56:31.745034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:56:31.745043 | orchestrator | 2025-06-22 19:56:31.745054 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-22 19:56:31.745063 | orchestrator | Sunday 22 June 2025 19:52:43 +0000 (0:00:03.124) 0:02:15.613 *********** 2025-06-22 19:56:31.745075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:56:31.745088 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.745097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:56:31.745105 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.745113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:56:31.745121 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.745129 | orchestrator | 2025-06-22 19:56:31.745137 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-22 19:56:31.745145 | orchestrator | Sunday 22 June 2025 19:52:44 +0000 (0:00:00.453) 0:02:16.067 *********** 2025-06-22 19:56:31.745153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2025-06-22 19:56:31.745161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:56:31.745169 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.745177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:56:31.745185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:56:31.745193 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.745201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:56:31.745209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:56:31.745217 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.745225 | orchestrator | 2025-06-22 19:56:31.745233 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-22 19:56:31.745241 | orchestrator | Sunday 22 June 2025 19:52:44 +0000 (0:00:00.632) 0:02:16.699 *********** 2025-06-22 19:56:31.745249 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.745262 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.745273 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.745281 | orchestrator | 2025-06-22 19:56:31.745289 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-22 19:56:31.745297 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:01.532) 0:02:18.231 *********** 2025-06-22 19:56:31.745305 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.745343 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.745351 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.745359 | orchestrator | 2025-06-22 19:56:31.745372 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-22 19:56:31.745380 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:02.060) 0:02:20.292 *********** 2025-06-22 19:56:31.745388 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.745396 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.745404 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.745412 | orchestrator | 2025-06-22 19:56:31.745420 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-22 19:56:31.745428 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:00.311) 0:02:20.603 *********** 2025-06-22 19:56:31.745436 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.745443 | orchestrator | 2025-06-22 19:56:31.745451 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-22 19:56:31.745459 | orchestrator | Sunday 22 
June 2025 19:52:49 +0000 (0:00:00.906) 0:02:21.510 *********** 2025-06-22 19:56:31.745468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:56:31.746179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:56:31.746295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:56:31.746371 | orchestrator | 2025-06-22 19:56:31.746397 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-22 19:56:31.746417 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:03.779) 0:02:25.290 *********** 2025-06-22 19:56:31.746475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:56:31.746490 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.746503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:56:31.746522 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.746548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:56:31.746561 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.746572 | orchestrator | 2025-06-22 19:56:31.746583 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-22 19:56:31.746594 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.679) 0:02:25.970 *********** 2025-06-22 19:56:31.746606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:56:31.746675 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.746686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:56:31.746753 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.746764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:56:31.746798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:56:31.746816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:56:31.746827 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.746838 | orchestrator | 2025-06-22 19:56:31.746850 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-22 19:56:31.746861 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:01.090) 0:02:27.061 *********** 2025-06-22 19:56:31.746871 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.746882 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.746893 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.746903 | orchestrator | 2025-06-22 19:56:31.746914 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-22 19:56:31.746925 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:01.672) 0:02:28.734 *********** 2025-06-22 19:56:31.746936 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.746946 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.746957 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.746967 | orchestrator | 2025-06-22 19:56:31.746978 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-22 19:56:31.746989 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:02.080) 0:02:30.814 *********** 2025-06-22 19:56:31.747000 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.747010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.747021 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.747036 | orchestrator | 2025-06-22 19:56:31.747058 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-22 19:56:31.747086 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.344) 0:02:31.159 *********** 2025-06-22 19:56:31.747102 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.747119 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.747135 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:56:31.747152 | orchestrator | 2025-06-22 19:56:31.747172 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-22 19:56:31.747189 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.300) 0:02:31.460 *********** 2025-06-22 19:56:31.747206 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.747217 | orchestrator | 2025-06-22 19:56:31.747235 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-22 19:56:31.747246 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:01.271) 0:02:32.732 *********** 2025-06-22 19:56:31.747280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:56:31.747295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:56:31.747372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:56:31.747428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747452 | orchestrator | 2025-06-22 19:56:31.747463 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-22 19:56:31.747474 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 (0:00:03.771) 0:02:36.503 *********** 2025-06-22 19:56:31.747486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:56:31.747509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747539 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 19:56:31.747551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:56:31.747563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747585 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.747608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
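[Editor's note, not part of the job output: the keystone loop items logged above all share the same 'haproxy' mapping, which is easier to read pulled out of the flattened log. The sketch below is a minimal Python illustration only; the dict values are copied verbatim from the items above, while the summarize() helper is hypothetical and simply shows how 'port' (the VIP-side frontend port) relates to 'listen_port' (the backend port) for the internal and external keystone frontends.]

# Sketch for readability; values taken from the haproxy-config loop items above.
keystone_haproxy = {
    "keystone_internal": {"enabled": True, "mode": "http", "external": False,
                          "tls_backend": "no", "port": "5000", "listen_port": "5000",
                          "backend_http_extra": ["balance roundrobin"]},
    "keystone_external": {"enabled": True, "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz", "tls_backend": "no",
                          "port": "5000", "listen_port": "5000",
                          "backend_http_extra": ["balance roundrobin"]},
}

def summarize(services):
    # Hypothetical helper: print one line per frontend with its VIP port,
    # backend listen port, and whether it is the internal or external endpoint.
    for name, svc in services.items():
        scope = "external" if svc["external"] else "internal"
        print(f"{name}: frontend :{svc['port']} -> backends :{svc['listen_port']} ({scope})")

summarize(keystone_haproxy)
# Expected output (both endpoints terminate on the same backend port 5000):
#   keystone_internal: frontend :5000 -> backends :5000 (internal)
#   keystone_external: frontend :5000 -> backends :5000 (external)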
2025-06-22 19:56:31.747621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:56:31.747643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:56:31.747654 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.747665 | orchestrator | 2025-06-22 19:56:31.747676 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-22 19:56:31.747687 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:00.585) 0:02:37.089 *********** 2025-06-22 19:56:31.747699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747721 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.747733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747755 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.747766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:56:31.747789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.747799 | orchestrator | 2025-06-22 19:56:31.747810 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-22 19:56:31.747821 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:01.018) 0:02:38.107 *********** 2025-06-22 19:56:31.747832 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.747842 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.747853 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.747864 | orchestrator | 2025-06-22 19:56:31.747875 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-22 19:56:31.747890 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:01.162) 0:02:39.270 *********** 2025-06-22 19:56:31.747901 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.747912 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.747922 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.747947 | orchestrator | 2025-06-22 19:56:31.747959 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-22 19:56:31.747975 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:02.220) 0:02:41.491 *********** 2025-06-22 19:56:31.747986 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.747997 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.748008 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.748018 | orchestrator | 2025-06-22 19:56:31.748029 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-22 19:56:31.748040 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.361) 0:02:41.853 *********** 2025-06-22 19:56:31.748051 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.748062 | orchestrator | 2025-06-22 19:56:31.748072 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-22 19:56:31.748083 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:01.481) 0:02:43.334 *********** 2025-06-22 19:56:31.748095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:56:31.748108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:56:31.748136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:56:31.748173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748185 | orchestrator | 2025-06-22 19:56:31.748196 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-22 19:56:31.748207 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:03.467) 0:02:46.801 *********** 2025-06-22 19:56:31.748219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:56:31.748231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748249 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.748270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:56:31.748282 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748293 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.748304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:56:31.748347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748359 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.748370 | orchestrator | 2025-06-22 19:56:31.748381 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-22 19:56:31.748392 | orchestrator | Sunday 22 June 2025 19:53:15 +0000 (0:00:00.677) 0:02:47.479 *********** 2025-06-22 19:56:31.748404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748434 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.748445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748467 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.748483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:56:31.748512 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.748524 | orchestrator | 2025-06-22 19:56:31.748535 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-22 19:56:31.748545 | orchestrator | Sunday 22 June 2025 19:53:17 +0000 (0:00:01.442) 0:02:48.921 *********** 2025-06-22 19:56:31.748557 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.748567 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.748578 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.748590 | orchestrator | 2025-06-22 19:56:31.748600 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-22 19:56:31.748611 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:01.504) 0:02:50.425 *********** 2025-06-22 19:56:31.748622 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.748632 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.748643 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.748653 | orchestrator | 2025-06-22 19:56:31.748664 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-22 19:56:31.748675 | orchestrator | Sunday 22 June 2025 19:53:20 +0000 (0:00:02.172) 0:02:52.598 *********** 2025-06-22 19:56:31.748686 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.748697 | orchestrator | 2025-06-22 19:56:31.748707 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-22 19:56:31.748718 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:01.033) 0:02:53.631 *********** 2025-06-22 19:56:31.748729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:56:31.748741 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:56:31.748807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:56:31.748819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748894 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748906 | orchestrator | 2025-06-22 19:56:31.748917 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-22 19:56:31.748928 | orchestrator | Sunday 22 June 2025 19:53:25 +0000 (0:00:03.593) 0:02:57.225 *********** 2025-06-22 19:56:31.748939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:56:31.748957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.748984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:56:31.749014 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.749026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749066 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.749077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:56:31.749094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.749167 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.749177 | orchestrator | 2025-06-22 19:56:31.749188 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-22 19:56:31.749199 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:00.685) 0:02:57.911 *********** 2025-06-22 19:56:31.749210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749233 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.749244 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749266 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.749277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:56:31.749299 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.749357 | orchestrator | 2025-06-22 19:56:31.749371 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-22 19:56:31.749382 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:00.882) 0:02:58.793 *********** 2025-06-22 19:56:31.749393 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.749404 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.749415 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.749426 | orchestrator | 2025-06-22 19:56:31.749437 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-22 19:56:31.749447 | orchestrator | Sunday 22 June 2025 19:53:28 +0000 (0:00:01.528) 0:03:00.322 *********** 2025-06-22 19:56:31.749459 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.749470 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.749481 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.749491 | orchestrator | 2025-06-22 19:56:31.749502 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-22 19:56:31.749513 | orchestrator | Sunday 22 June 2025 19:53:30 +0000 (0:00:02.025) 0:03:02.347 *********** 2025-06-22 19:56:31.749524 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.749535 | orchestrator | 2025-06-22 19:56:31.749545 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-22 19:56:31.749561 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:00:00.979) 0:03:03.327 *********** 2025-06-22 19:56:31.749573 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:56:31.749584 | orchestrator | 2025-06-22 19:56:31.749595 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-22 19:56:31.749605 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:02.535) 0:03:05.862 *********** 2025-06-22 19:56:31.749628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749662 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.749685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749717 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.749728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749752 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.749763 | orchestrator | 2025-06-22 19:56:31.749773 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-22 19:56:31.749785 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:02.660) 0:03:08.522 *********** 2025-06-22 19:56:31.749808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749853 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.749874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749892 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.749904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:56:31.749917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:56:31.749928 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.749939 | orchestrator | 2025-06-22 19:56:31.749950 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-22 19:56:31.749961 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:01.968) 0:03:10.490 *********** 2025-06-22 19:56:31.749973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750240 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750275 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:56:31.750331 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750344 | orchestrator | 2025-06-22 19:56:31.750356 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-22 19:56:31.750367 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:02.912) 0:03:13.403 *********** 2025-06-22 19:56:31.750377 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.750388 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.750399 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.750410 | orchestrator | 2025-06-22 19:56:31.750421 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-22 19:56:31.750432 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:02.061) 0:03:15.465 *********** 2025-06-22 19:56:31.750442 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750453 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750471 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750481 | orchestrator | 2025-06-22 19:56:31.750492 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-22 19:56:31.750503 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:01.340) 0:03:16.805 *********** 2025-06-22 19:56:31.750513 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750524 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750535 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750546 | orchestrator | 2025-06-22 19:56:31.750556 | orchestrator | TASK [include_role : memcached] 
************************************************ 2025-06-22 19:56:31.750567 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:00.316) 0:03:17.122 *********** 2025-06-22 19:56:31.750578 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.750588 | orchestrator | 2025-06-22 19:56:31.750599 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-22 19:56:31.750614 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.997) 0:03:18.119 *********** 2025-06-22 19:56:31.750635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:56:31.750649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:56:31.750661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:56:31.750672 | orchestrator | 2025-06-22 19:56:31.750683 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-22 19:56:31.750694 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:01.661) 0:03:19.780 *********** 2025-06-22 19:56:31.750705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:56:31.750724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:56:31.750735 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750751 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:56:31.750783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750795 | orchestrator | 2025-06-22 19:56:31.750808 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-22 19:56:31.750820 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:00.349) 0:03:20.130 *********** 2025-06-22 19:56:31.750834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:56:31.750846 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:56:31.750871 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:56:31.750897 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750909 | orchestrator | 2025-06-22 19:56:31.750921 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-22 19:56:31.750934 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:00.537) 0:03:20.667 *********** 2025-06-22 19:56:31.750946 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.750958 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.750970 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.750993 | orchestrator | 2025-06-22 19:56:31.751006 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-22 19:56:31.751018 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:00.653) 0:03:21.321 *********** 2025-06-22 19:56:31.751030 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.751042 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.751054 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.751066 | orchestrator | 2025-06-22 19:56:31.751078 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-22 19:56:31.751090 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:01.141) 0:03:22.463 *********** 2025-06-22 19:56:31.751102 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.751114 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.751126 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.751138 | orchestrator | 2025-06-22 19:56:31.751149 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-22 19:56:31.751160 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:00.281) 0:03:22.744 *********** 2025-06-22 19:56:31.751171 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.751181 | orchestrator | 2025-06-22 19:56:31.751192 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-22 19:56:31.751203 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:01.230) 0:03:23.974 *********** 2025-06-22 19:56:31.751219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:56:31.751238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.751292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:56:31.751342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:56:31.751372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 
19:56:31.751458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.751556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.751572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.751646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': 
False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.751885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.751898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.751960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.751972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.751983 | orchestrator | 2025-06-22 19:56:31.751994 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-22 19:56:31.752005 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:03.761) 0:03:27.735 *********** 2025-06-22 19:56:31.752017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:56:31.752029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:56:31.752092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.752103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
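The loop items in this play are the per-service definitions the neutron role iterates over: each names the container, the registry.osism.tech kolla/release image tag, the bind mounts, an optional healthcheck block, and, for API-style services, a haproxy map describing internal and external listeners. Below is a minimal Python sketch, not kolla-ansible's actual filter logic, of how such a service map can be narrowed to the entries that would receive HAProxy configuration; the helper name haproxy_candidates and the simplified enabled/host checks are assumptions for illustration (the real role applies further conditions, such as the single-external-frontend toggle behind the skips in this task).

```python
# Minimal sketch, NOT kolla-ansible's actual code: reduce a service map shaped
# like the loop items in this log to the entries that would get an HAProxy
# section. The helper name and the simplified enabled/host checks are
# assumptions for illustration only.

def haproxy_candidates(services: dict) -> dict:
    """Return {service_name: listeners} for services worth configuring."""
    selected = {}
    for name, svc in services.items():
        enabled = str(svc.get("enabled", False)).lower() in ("true", "yes")
        if not (enabled and svc.get("host_in_groups", False)):
            continue  # roughly what the 'skipping:' lines correspond to
        listeners = {
            lname: lcfg
            for lname, lcfg in svc.get("haproxy", {}).items()
            if lcfg.get("enabled")
        }
        if listeners:
            selected[name] = listeners
    return selected


# Two items trimmed from the log: neutron-server carries enabled listeners,
# while neutron-tls-proxy is disabled ('enabled': 'no') and drops out.
services = {
    "neutron-server": {
        "enabled": True,
        "host_in_groups": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "port": "9696"},
        },
    },
    "neutron-tls-proxy": {
        "enabled": "no",
        "host_in_groups": True,
        "haproxy": {
            "neutron_tls_proxy": {"enabled": False, "mode": "http", "port": "9696"},
        },
    },
}

print(sorted(haproxy_candidates(services)))  # ['neutron-server']
```

Running the sketch keeps only neutron-server's two listeners and drops the disabled neutron-tls-proxy entry, which is the kind of changed/skipping split the haproxy-config tasks in this play reflect.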
2025-06-22 19:56:31.752119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.752205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.752252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.752434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.752651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:56:31.752670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.752694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752818 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.752829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:56:31.752857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:56:31.752930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.752946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.752966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752977 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.752987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:56:31.752997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.753019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:56:31.753056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.753069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:56:31.753079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 
19:56:31.753089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-22 19:56:31.753099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-22 19:56:31.753122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-22 19:56:31.753158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 19:56:31.753169 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:56:31.753180 | orchestrator |
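Most of the service definitions above carry a healthcheck block in kolla's format: durations given as string seconds plus a CMD-SHELL test such as healthcheck_port <process> <port> or healthcheck_curl <url>. As a rough illustration (an assumed mapping, not the code this deployment actually runs), the same information can be expressed in the Docker Engine API's HealthConfig shape, which takes durations in nanoseconds:

```python
# Rough illustration (an assumed mapping, not the deployment's own code):
# convert a kolla-style healthcheck block into a Docker Engine API
# HealthConfig dict, where durations are expressed in nanoseconds.

NANOSECOND = 1_000_000_000


def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],
        "Interval": int(hc["interval"]) * NANOSECOND,
        "Timeout": int(hc["timeout"]) * NANOSECOND,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * NANOSECOND,
    }


# Values copied from the neutron-ovn-metadata-agent item in this log.
print(to_docker_healthcheck({
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
    "timeout": "30",
}))
```

The healthcheck_port and healthcheck_curl commands themselves are helper scripts shipped inside the kolla images; this log only records the parameters passed to them.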
19:56:31.753210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753230 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.753240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753260 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.753269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:56:31.753296 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.753306 | orchestrator | 2025-06-22 19:56:31.753336 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-22 19:56:31.753346 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:01.700) 0:03:30.817 *********** 2025-06-22 19:56:31.753356 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.753365 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.753375 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.753384 | orchestrator | 2025-06-22 19:56:31.753394 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-22 19:56:31.753403 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:01.162) 0:03:31.979 *********** 2025-06-22 19:56:31.753413 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.753422 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.753432 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.753442 | orchestrator | 2025-06-22 19:56:31.753451 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-22 19:56:31.753461 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:02.108) 0:03:34.087 *********** 2025-06-22 19:56:31.753471 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.753480 | orchestrator | 2025-06-22 19:56:31.753490 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-22 19:56:31.753499 | orchestrator | Sunday 22 June 2025 19:54:03 +0000 (0:00:01.140) 0:03:35.228 *********** 2025-06-22 19:56:31.753510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.753551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.753566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.753583 | orchestrator | 2025-06-22 19:56:31.753594 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-22 19:56:31.753605 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:03.116) 0:03:38.344 *********** 2025-06-22 19:56:31.753616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.753628 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.753723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.753749 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.753797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.753810 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.753821 | orchestrator | 2025-06-22 19:56:31.753832 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-22 19:56:31.753843 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:00.465) 0:03:38.809 *********** 2025-06-22 19:56:31.753853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753896 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.753907 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753918 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.753929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:56:31.753950 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.753960 | orchestrator | 2025-06-22 19:56:31.753970 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-22 19:56:31.753979 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:00.651) 0:03:39.461 *********** 2025-06-22 19:56:31.753989 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.753999 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.754008 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.754048 | orchestrator | 2025-06-22 19:56:31.754058 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-22 19:56:31.754068 | orchestrator | Sunday 22 June 2025 19:54:08 +0000 (0:00:01.337) 0:03:40.798 *********** 2025-06-22 19:56:31.754077 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.754087 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.754097 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.754106 | orchestrator | 2025-06-22 19:56:31.754116 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-22 19:56:31.754126 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:02.098) 0:03:42.897 *********** 2025-06-22 19:56:31.754135 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.754145 | orchestrator | 2025-06-22 19:56:31.754155 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-22 19:56:31.754165 | orchestrator | Sunday 22 June 2025 19:54:12 +0000 (0:00:01.273) 0:03:44.171 *********** 2025-06-22 19:56:31.754209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.754230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.754296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.754337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754379 | orchestrator | 2025-06-22 19:56:31.754388 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-22 19:56:31.754398 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:03.847) 0:03:48.019 *********** 2025-06-22 19:56:31.754414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.754460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754482 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.754493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.754504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754525 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.754565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.754598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
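The nova haproxy-config items above show that kolla-style service definitions mix boolean and string flags: 'enabled' is True/False on most entries but 'no' on nova_metadata_external and nova-super-conductor. Below is a minimal Python sketch, assuming a hypothetical helper is_enabled() and a trimmed nova_api dict, of how such mixed flags could be normalised before deciding which haproxy frontends to render; it is an illustration only, not kolla-ansible code.

# Hypothetical sketch: normalise kolla-style enabled flags, which appear in the
# log above both as booleans (True/False) and as strings ('yes'/'no').
def is_enabled(value) -> bool:
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

# Service definition trimmed from the nova-api item logged above.
nova_api = {
    "enabled": True,
    "haproxy": {
        "nova_api": {"enabled": True, "port": "8774"},
        "nova_metadata_external": {"enabled": "no", "port": "8775"},
    },
}

# Only frontends whose own flag and the service-level flag are both enabled
# would end up in the rendered haproxy configuration.
active = [
    name
    for name, frontend in nova_api["haproxy"].items()
    if is_enabled(nova_api["enabled"]) and is_enabled(frontend["enabled"])
]
print(active)  # ['nova_api']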
2025-06-22 19:56:31.754609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.754619 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.754630 | orchestrator | 2025-06-22 19:56:31.754640 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-22 19:56:31.754650 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:00.749) 0:03:48.768 *********** 2025-06-22 19:56:31.754659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754703 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.754713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754765 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.754802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:56:31.754844 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.754854 | orchestrator | 2025-06-22 19:56:31.754864 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-22 19:56:31.754874 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:00.739) 0:03:49.507 *********** 2025-06-22 19:56:31.754883 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.754893 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.754903 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.754912 | orchestrator | 2025-06-22 19:56:31.754922 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-22 19:56:31.754933 | orchestrator | Sunday 22 June 2025 19:54:19 +0000 (0:00:01.394) 0:03:50.902 *********** 2025-06-22 19:56:31.754943 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.754952 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.754961 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.754971 | orchestrator | 2025-06-22 19:56:31.754981 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-22 19:56:31.754991 | orchestrator | Sunday 22 June 2025 19:54:20 +0000 (0:00:01.829) 0:03:52.731 *********** 2025-06-22 19:56:31.755000 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.755010 | orchestrator | 2025-06-22 19:56:31.755020 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-22 19:56:31.755029 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:01.340) 0:03:54.071 *********** 2025-06-22 19:56:31.755039 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-22 19:56:31.755049 | orchestrator | 2025-06-22 19:56:31.755059 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-22 19:56:31.755068 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.926) 0:03:54.998 *********** 2025-06-22 19:56:31.755079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:56:31.755097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:56:31.755108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:56:31.755119 | orchestrator | 2025-06-22 19:56:31.755129 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-22 19:56:31.755147 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:03.465) 0:03:58.464 *********** 2025-06-22 19:56:31.755185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755197 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.755207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755217 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.755227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755237 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.755247 | orchestrator | 2025-06-22 19:56:31.755257 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-22 19:56:31.755267 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:01.039) 0:03:59.503 *********** 2025-06-22 19:56:31.755277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.755364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755386 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.755396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:56:31.755417 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.755426 | orchestrator | 2025-06-22 19:56:31.755436 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:56:31.755446 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:01.680) 0:04:01.183 *********** 2025-06-22 19:56:31.755456 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.755466 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.755475 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.755485 | orchestrator | 2025-06-22 19:56:31.755495 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:56:31.755504 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:02.203) 0:04:03.386 *********** 2025-06-22 19:56:31.755514 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.755524 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.755533 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.755543 | orchestrator | 2025-06-22 19:56:31.755553 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-22 19:56:31.755568 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:02.980) 0:04:06.367 *********** 2025-06-22 19:56:31.755579 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-22 19:56:31.755589 | orchestrator | 2025-06-22 19:56:31.755599 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-22 19:56:31.755641 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:00.879) 0:04:07.247 *********** 2025-06-22 19:56:31.755653 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755663 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.755673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755691 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.755701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755711 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.755721 | orchestrator | 2025-06-22 19:56:31.755731 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-22 19:56:31.755741 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:01.660) 0:04:08.907 *********** 2025-06-22 19:56:31.755750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755761 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.755770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:56:31.755791 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.755801 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.755810 | orchestrator | 2025-06-22 19:56:31.755820 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-22 19:56:31.755835 | orchestrator | Sunday 22 June 2025 19:54:38 +0000 (0:00:01.767) 0:04:10.675 *********** 2025-06-22 19:56:31.755845 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.755854 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.755864 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.755874 | orchestrator | 2025-06-22 19:56:31.755882 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:56:31.755911 | orchestrator | Sunday 22 June 2025 19:54:40 +0000 (0:00:01.476) 0:04:12.151 *********** 2025-06-22 19:56:31.755921 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.755929 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.755937 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.755945 | orchestrator | 2025-06-22 19:56:31.755953 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:56:31.755961 | orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:02.447) 0:04:14.599 *********** 2025-06-22 19:56:31.755975 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.755983 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.755991 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.755999 | orchestrator | 2025-06-22 19:56:31.756007 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-22 19:56:31.756015 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:02.973) 0:04:17.572 *********** 2025-06-22 19:56:31.756023 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-22 19:56:31.756031 | orchestrator | 2025-06-22 19:56:31.756039 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-22 19:56:31.756047 | orchestrator | Sunday 22 June 2025 19:54:46 +0000 (0:00:00.891) 0:04:18.464 *********** 2025-06-22 19:56:31.756055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756063 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.756072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 
'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756080 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.756088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756096 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.756104 | orchestrator | 2025-06-22 19:56:31.756112 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-22 19:56:31.756120 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:01.012) 0:04:19.476 *********** 2025-06-22 19:56:31.756128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756136 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.756148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756162 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.756192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:56:31.756202 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.756210 | orchestrator | 2025-06-22 19:56:31.756218 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-22 19:56:31.756226 | orchestrator 
| Sunday 22 June 2025 19:54:49 +0000 (0:00:01.525) 0:04:21.002 *********** 2025-06-22 19:56:31.756234 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.756242 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.756250 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.756258 | orchestrator | 2025-06-22 19:56:31.756266 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:56:31.756274 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:01.796) 0:04:22.799 *********** 2025-06-22 19:56:31.756282 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.756290 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.756298 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.756319 | orchestrator | 2025-06-22 19:56:31.756329 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:56:31.756337 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:02.183) 0:04:24.983 *********** 2025-06-22 19:56:31.756345 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.756353 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.756361 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.756369 | orchestrator | 2025-06-22 19:56:31.756377 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-22 19:56:31.756385 | orchestrator | Sunday 22 June 2025 19:54:55 +0000 (0:00:02.745) 0:04:27.728 *********** 2025-06-22 19:56:31.756393 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.756401 | orchestrator | 2025-06-22 19:56:31.756409 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-22 19:56:31.756417 | orchestrator | Sunday 22 June 2025 19:54:57 +0000 (0:00:01.217) 0:04:28.946 *********** 2025-06-22 19:56:31.756425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.756435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.756453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756520 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.756586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756625 | orchestrator | 2025-06-22 19:56:31.756633 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-22 19:56:31.756641 | orchestrator | Sunday 22 June 2025 19:55:00 +0000 (0:00:03.213) 0:04:32.160 *********** 2025-06-22 19:56:31.756674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.756684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756723 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.756737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.756766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.756814 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.756823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:56:31.756855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:56:31.756874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:56:31.756882 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.756890 | orchestrator | 2025-06-22 19:56:31.756898 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-22 19:56:31.756906 | orchestrator | Sunday 22 June 2025 19:55:00 +0000 (0:00:00.653) 0:04:32.813 *********** 2025-06-22 19:56:31.756914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.756944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756960 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.756969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:56:31.756985 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.756993 | orchestrator | 2025-06-22 19:56:31.757001 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-22 19:56:31.757009 | orchestrator | Sunday 22 June 2025 19:55:01 +0000 (0:00:00.773) 0:04:33.586 *********** 2025-06-22 19:56:31.757017 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.757025 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.757033 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.757041 | orchestrator | 2025-06-22 19:56:31.757048 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-22 19:56:31.757056 | orchestrator | Sunday 22 June 2025 19:55:03 +0000 (0:00:01.529) 0:04:35.116 *********** 2025-06-22 19:56:31.757064 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.757072 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.757080 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.757088 | orchestrator | 2025-06-22 19:56:31.757096 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-22 19:56:31.757104 | orchestrator | Sunday 22 June 2025 19:55:05 +0000 (0:00:01.817) 0:04:36.934 *********** 2025-06-22 19:56:31.757115 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.757123 | orchestrator | 2025-06-22 19:56:31.757131 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-22 19:56:31.757139 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:01.313) 0:04:38.247 *********** 2025-06-22 19:56:31.757170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:56:31.757181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:56:31.757195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:56:31.757205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:56:31.757238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:56:31.757250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:56:31.757265 | orchestrator | 2025-06-22 19:56:31.757273 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-22 19:56:31.757281 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:04.918) 0:04:43.166 *********** 2025-06-22 19:56:31.757289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:56:31.757298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:56:31.757323 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.757359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:56:31.757370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:56:31.757384 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.757393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:56:31.757401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:56:31.757410 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.757418 | orchestrator | 2025-06-22 19:56:31.757426 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-22 19:56:31.757434 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.778) 0:04:43.944 *********** 2025-06-22 19:56:31.757446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:56:31.757474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:56:31.757484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:56:31.757492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:56:31.757505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.757513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:56:31.757522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:56:31.757530 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.757537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:56:31.757545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:56:31.757554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
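The haproxy-config and firewall entries above all follow the same selection pattern: each role loops over the service map (the item dicts printed in this log), renders configuration only for services that are enabled, expose a 'haproxy' section, and belong to a group the current host is in, and reports every other combination as skipping. The single-external-frontend and firewall variants additionally skip on every node in this run because those features are not enabled in the testbed. Below is a minimal sketch of that per-item decision, assuming trimmed-down item dicts shaped like the ones in this log; it is an illustration only, not the actual kolla-ansible role code.

    # Illustrative sketch: approximates how items above end up "changed" vs
    # "skipping"; this is NOT the kolla-ansible haproxy-config implementation.
    def renders_haproxy_config(item, host_groups, feature_enabled=True):
        """Return True when a service item would be templated on this host."""
        svc = item["value"]
        return (
            feature_enabled                      # e.g. firewall / single-frontend toggle
            and svc.get("enabled", False)        # service enabled at all
            and "haproxy" in svc                 # service exposes frontends/backends
            and svc.get("group") in host_groups  # this host actually runs the service
        )

    # Hypothetical, heavily trimmed items mirroring the log output above.
    octavia_api = {"key": "octavia-api",
                   "value": {"group": "octavia-api", "enabled": True,
                             "haproxy": {"octavia_api": {"port": "9876"}}}}
    octavia_worker = {"key": "octavia-worker",
                      "value": {"group": "octavia-worker", "enabled": True}}

    host_groups = {"octavia-api", "loadbalancer"}
    for item in (octavia_api, octavia_worker):
        state = "changed" if renders_haproxy_config(item, host_groups) else "skipping"
        print(f"{state}: [testbed-node-0] => {item['key']}")

With feature_enabled=False (the firewall and single-external-frontend cases in this run) every item falls through to skipping, which matches the node-level skips reported for all three testbed nodes in the surrounding tasks.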
2025-06-22 19:56:31.757562 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.757570 | orchestrator | 2025-06-22 19:56:31.757578 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-22 19:56:31.757585 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.807) 0:04:44.751 *********** 2025-06-22 19:56:31.757593 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.757601 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.757609 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.757616 | orchestrator | 2025-06-22 19:56:31.757624 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-22 19:56:31.757632 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:00.411) 0:04:45.162 *********** 2025-06-22 19:56:31.757640 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.757648 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.757656 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.757664 | orchestrator | 2025-06-22 19:56:31.757671 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-22 19:56:31.757679 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:01.303) 0:04:46.465 *********** 2025-06-22 19:56:31.757687 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.757695 | orchestrator | 2025-06-22 19:56:31.757703 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-22 19:56:31.757710 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:01.590) 0:04:48.056 *********** 2025-06-22 19:56:31.757719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:56:31.757758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.757768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.757794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:56:31.757802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.757810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.757872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:56:31.757880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.757889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.757926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:56:31.757936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.757944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:56:31.757961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.757975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.757993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.758001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758040 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:56:31.758068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.758082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-06-22 19:56:31.758100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758108 | orchestrator | 2025-06-22 19:56:31.758116 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-22 19:56:31.758124 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:03.843) 0:04:51.899 *********** 2025-06-22 19:56:31.758133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:56:31.758141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.758154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:56:31.758198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.758206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758242 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.758255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:56:31.758263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.758272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:56:31.758338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.758347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758377 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.758385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:56:31.758394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:56:31.758416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:56:31.758472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:56:31.758482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:56:31.758507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:56:31.758515 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.758524 | orchestrator | 2025-06-22 
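Note on reading the dumped items above: each `item` is one entry from the prometheus service map that the haproxy-config role iterates over, and the nested `haproxy` dict lists the listeners that may be rendered. Whether a host reports `changed` or `skipping` depends on flags such as the service-level `enabled` and the per-listener `enabled`/`external` settings. The minimal Python sketch below mirrors that structure with values taken from this log; the filter function and its conditions are illustrative assumptions, not kolla-ansible's actual task logic.

# Illustrative only: mirrors the item structure dumped above; the filter
# conditions are assumptions, not kolla-ansible's real when/template clauses.
prometheus_services = {
    "prometheus-server": {
        "container_name": "prometheus_server",
        "enabled": True,
        "haproxy": {
            "prometheus_server": {
                "enabled": True, "mode": "http", "external": False,
                "port": "9091", "active_passive": True,
            },
            "prometheus_server_external": {
                "enabled": False, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9091", "listen_port": "9091", "active_passive": True,
            },
        },
    },
    "prometheus-openstack-exporter": {
        "container_name": "prometheus_openstack_exporter",
        "enabled": False,  # disabled service: its related tasks log "skipping"
        "haproxy": {},
    },
}


def listeners_to_render(services):
    """Yield (service, listener, conf) for entries that would get haproxy config."""
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue  # whole service disabled
        for listener, conf in svc.get("haproxy", {}).items():
            if conf.get("enabled"):
                yield name, listener, conf


for service, listener, conf in listeners_to_render(prometheus_services):
    print(f"{service}: {listener} -> port {conf['port']} (external={conf['external']})")

Running this prints only the internal prometheus_server listener, which matches the pattern of results above: the external prometheus_server listener and the openstack exporter are disabled, so the corresponding items are skipped.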
19:56:31.758532 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-22 19:56:31.758540 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:01.065) 0:04:52.964 *********** 2025-06-22 19:56:31.758548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758588 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.758596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.758637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:56:31.758653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758662 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:56:31.758670 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.758678 | orchestrator | 2025-06-22 19:56:31.758686 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-22 19:56:31.758694 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:00.881) 0:04:53.846 *********** 2025-06-22 19:56:31.758702 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.758710 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.758718 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.758725 | orchestrator | 2025-06-22 19:56:31.758733 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-22 19:56:31.758741 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.390) 0:04:54.236 *********** 2025-06-22 19:56:31.758753 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.758761 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.758769 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.758777 | orchestrator | 2025-06-22 19:56:31.758785 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-22 19:56:31.758793 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:01.457) 0:04:55.693 *********** 2025-06-22 19:56:31.758801 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.758808 | orchestrator | 2025-06-22 19:56:31.758816 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-22 19:56:31.758824 | orchestrator | Sunday 22 June 2025 19:55:25 +0000 (0:00:02.007) 0:04:57.701 *********** 2025-06-22 19:56:31.758833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:31.758847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:31.758898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:31.758915 | orchestrator | 2025-06-22 19:56:31.758923 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-22 19:56:31.758931 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:02.668) 0:05:00.370 *********** 2025-06-22 19:56:31.758949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:56:31.758964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:56:31.758973 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.758981 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.758989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:56:31.758998 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759006 | orchestrator | 2025-06-22 19:56:31.759013 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-22 19:56:31.759021 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:00.420) 0:05:00.790 *********** 2025-06-22 19:56:31.759029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:56:31.759037 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:56:31.759053 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:56:31.759069 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759080 | orchestrator | 2025-06-22 19:56:31.759088 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-22 19:56:31.759095 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:01.057) 0:05:01.848 *********** 2025-06-22 19:56:31.759103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759111 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759119 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759126 | orchestrator | 2025-06-22 19:56:31.759137 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-22 19:56:31.759146 | orchestrator | Sunday 22 June 2025 19:55:30 +0000 
(0:00:00.556) 0:05:02.404 *********** 2025-06-22 19:56:31.759158 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759166 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759174 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759181 | orchestrator | 2025-06-22 19:56:31.759189 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-22 19:56:31.759201 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:01.496) 0:05:03.901 *********** 2025-06-22 19:56:31.759209 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:31.759217 | orchestrator | 2025-06-22 19:56:31.759225 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-22 19:56:31.759233 | orchestrator | Sunday 22 June 2025 19:55:33 +0000 (0:00:01.798) 0:05:05.700 *********** 2025-06-22 19:56:31.759241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:56:31.759346 | orchestrator | 2025-06-22 19:56:31.759357 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-22 19:56:31.759365 | orchestrator | Sunday 22 June 2025 19:55:39 +0000 (0:00:05.708) 0:05:11.408 *********** 2025-06-22 19:56:31.759373 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759400 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759430 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:56:31.759460 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759468 | orchestrator | 2025-06-22 19:56:31.759476 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-22 19:56:31.759487 | orchestrator | Sunday 22 June 2025 19:55:40 +0000 (0:00:00.627) 0:05:12.035 *********** 2025-06-22 19:56:31.759496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759525 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759533 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759597 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:56:31.759618 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759626 | orchestrator | 2025-06-22 19:56:31.759633 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-22 19:56:31.759642 | orchestrator | Sunday 22 June 2025 19:55:41 +0000 (0:00:01.353) 0:05:13.389 *********** 2025-06-22 19:56:31.759650 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.759657 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.759665 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.759673 | orchestrator | 2025-06-22 19:56:31.759681 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-22 19:56:31.759688 | orchestrator | Sunday 22 June 2025 19:55:42 +0000 (0:00:01.195) 0:05:14.584 *********** 2025-06-22 19:56:31.759696 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.759704 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.759712 | orchestrator | changed: [testbed-node-2] 2025-06-22 
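For orientation: the skyline items above define two listeners, skyline_apiserver on port 9998 and skyline_console on port 9999, each with an external counterpart behind api.testbed.osism.xyz, and the "Copying over skyline haproxy config" task renders them into the haproxy configuration on every controller. The sketch below is a simplified stand-in for that rendering step; the bind address is a placeholder and the real kolla-ansible Jinja2 templates emit considerably more options.

# Simplified stand-in for the haproxy template rendering; not the actual
# kolla-ansible template. The VIP is a placeholder, the backend addresses
# reuse the node IPs visible in the healthcheck_curl URLs above.
def render_listener(name, conf, vip="<internal-vip>", backends=()):
    lines = [
        f"listen {name}",
        f"    mode {conf.get('mode', 'http')}",
        f"    bind {vip}:{conf['port']}",
    ]
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{conf.get('listen_port', conf['port'])} check")
    return "\n".join(lines)


skyline_apiserver = {
    "enabled": "yes", "mode": "http", "external": False,
    "port": "9998", "listen_port": "9998", "tls_backend": "no",
}
nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
print(render_listener("skyline_apiserver", skyline_apiserver, backends=nodes))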
19:56:31.759719 | orchestrator | 2025-06-22 19:56:31.759727 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-22 19:56:31.759736 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:02.452) 0:05:17.037 *********** 2025-06-22 19:56:31.759743 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759751 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759759 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759767 | orchestrator | 2025-06-22 19:56:31.759775 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-22 19:56:31.759783 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:00.371) 0:05:17.408 *********** 2025-06-22 19:56:31.759790 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759798 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759806 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759814 | orchestrator | 2025-06-22 19:56:31.759822 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-22 19:56:31.759833 | orchestrator | Sunday 22 June 2025 19:55:46 +0000 (0:00:00.849) 0:05:18.257 *********** 2025-06-22 19:56:31.759839 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759853 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759859 | orchestrator | 2025-06-22 19:56:31.759866 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-22 19:56:31.759873 | orchestrator | Sunday 22 June 2025 19:55:46 +0000 (0:00:00.363) 0:05:18.621 *********** 2025-06-22 19:56:31.759883 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759890 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759896 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759903 | orchestrator | 2025-06-22 19:56:31.759910 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-22 19:56:31.759917 | orchestrator | Sunday 22 June 2025 19:55:47 +0000 (0:00:00.331) 0:05:18.952 *********** 2025-06-22 19:56:31.759924 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759930 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759937 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759943 | orchestrator | 2025-06-22 19:56:31.759950 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-22 19:56:31.759957 | orchestrator | Sunday 22 June 2025 19:55:47 +0000 (0:00:00.330) 0:05:19.283 *********** 2025-06-22 19:56:31.759964 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.759971 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.759978 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.759985 | orchestrator | 2025-06-22 19:56:31.759991 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-22 19:56:31.759998 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:00.842) 0:05:20.126 *********** 2025-06-22 19:56:31.760005 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760012 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760024 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760031 | orchestrator | 2025-06-22 19:56:31.760038 | orchestrator | 
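The RUNNING HANDLER block that begins here and continues below rotates the loadbalancer stack: the backup keepalived, haproxy and proxysql containers are stopped and started again, the play waits for them to come up, and the corresponding "master" handlers are skipped on this run. The waits amount to polling a TCP endpoint until it answers; a minimal sketch of such a probe follows (placeholder address, and not the Ansible wait_for-style check the role presumably uses).

# Minimal port-wait sketch. The address below is a placeholder (TEST-NET),
# and 9091 is simply the prometheus_server haproxy port seen in this log.
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 1.0) -> bool:
    """Poll host:port until it accepts TCP connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False


if wait_for_port("192.0.2.10", 9091, timeout=10):
    print("haproxy is answering on the VIP")
else:
    print("timed out waiting for haproxy")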
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-22 19:56:31.760045 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:00.637) 0:05:20.763 *********** 2025-06-22 19:56:31.760051 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760058 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760065 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760072 | orchestrator | 2025-06-22 19:56:31.760078 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-22 19:56:31.760085 | orchestrator | Sunday 22 June 2025 19:55:49 +0000 (0:00:00.342) 0:05:21.106 *********** 2025-06-22 19:56:31.760092 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760098 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760105 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760111 | orchestrator | 2025-06-22 19:56:31.760118 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-22 19:56:31.760125 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:01.343) 0:05:22.450 *********** 2025-06-22 19:56:31.760131 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760138 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760145 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760151 | orchestrator | 2025-06-22 19:56:31.760158 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-22 19:56:31.760165 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.878) 0:05:23.329 *********** 2025-06-22 19:56:31.760171 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760178 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760184 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760191 | orchestrator | 2025-06-22 19:56:31.760198 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-22 19:56:31.760205 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:00.796) 0:05:24.126 *********** 2025-06-22 19:56:31.760211 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.760218 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.760225 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.760231 | orchestrator | 2025-06-22 19:56:31.760238 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-22 19:56:31.760245 | orchestrator | Sunday 22 June 2025 19:56:01 +0000 (0:00:09.480) 0:05:33.606 *********** 2025-06-22 19:56:31.760252 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760259 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760265 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760272 | orchestrator | 2025-06-22 19:56:31.760278 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-22 19:56:31.760285 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:00.804) 0:05:34.410 *********** 2025-06-22 19:56:31.760292 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.760298 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.760305 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.760322 | orchestrator | 2025-06-22 19:56:31.760329 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-22 19:56:31.760336 | orchestrator | Sunday 22 June 2025 19:56:15 
+0000 (0:00:12.909) 0:05:47.319 *********** 2025-06-22 19:56:31.760343 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760350 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760356 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760363 | orchestrator | 2025-06-22 19:56:31.760370 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-22 19:56:31.760377 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:00.871) 0:05:48.191 *********** 2025-06-22 19:56:31.760383 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:31.760390 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:31.760397 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:31.760403 | orchestrator | 2025-06-22 19:56:31.760410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-22 19:56:31.760424 | orchestrator | Sunday 22 June 2025 19:56:25 +0000 (0:00:09.508) 0:05:57.700 *********** 2025-06-22 19:56:31.760431 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760437 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760444 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760451 | orchestrator | 2025-06-22 19:56:31.760458 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-22 19:56:31.760464 | orchestrator | Sunday 22 June 2025 19:56:26 +0000 (0:00:00.355) 0:05:58.056 *********** 2025-06-22 19:56:31.760475 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760482 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760489 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760495 | orchestrator | 2025-06-22 19:56:31.760502 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-22 19:56:31.760509 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:00.931) 0:05:58.987 *********** 2025-06-22 19:56:31.760516 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760523 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760534 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760541 | orchestrator | 2025-06-22 19:56:31.760547 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-22 19:56:31.760554 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:00.345) 0:05:59.333 *********** 2025-06-22 19:56:31.760561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760567 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760574 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760580 | orchestrator | 2025-06-22 19:56:31.760587 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-22 19:56:31.760594 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:00.332) 0:05:59.666 *********** 2025-06-22 19:56:31.760601 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760607 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760614 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760621 | orchestrator | 2025-06-22 19:56:31.760627 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-22 19:56:31.760634 | orchestrator | Sunday 22 June 2025 19:56:28 +0000 (0:00:00.392) 0:06:00.058 *********** 2025-06-22 19:56:31.760641 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:31.760648 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:31.760654 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:31.760661 | orchestrator | 2025-06-22 19:56:31.760668 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-22 19:56:31.760674 | orchestrator | Sunday 22 June 2025 19:56:28 +0000 (0:00:00.696) 0:06:00.754 *********** 2025-06-22 19:56:31.760681 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760688 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760694 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760701 | orchestrator | 2025-06-22 19:56:31.760708 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-22 19:56:31.760714 | orchestrator | Sunday 22 June 2025 19:56:29 +0000 (0:00:00.885) 0:06:01.640 *********** 2025-06-22 19:56:31.760721 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:31.760728 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:31.760734 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:31.760741 | orchestrator | 2025-06-22 19:56:31.760748 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:56:31.760754 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:56:31.760762 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:56:31.760769 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:56:31.760780 | orchestrator | 2025-06-22 19:56:31.760786 | orchestrator | 2025-06-22 19:56:31.760793 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:56:31.760800 | orchestrator | Sunday 22 June 2025 19:56:30 +0000 (0:00:00.890) 0:06:02.530 *********** 2025-06-22 19:56:31.760807 | orchestrator | =============================================================================== 2025-06-22 19:56:31.760813 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.91s 2025-06-22 19:56:31.760820 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.51s 2025-06-22 19:56:31.760827 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.48s 2025-06-22 19:56:31.760834 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.71s 2025-06-22 19:56:31.760841 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.14s 2025-06-22 19:56:31.760847 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.92s 2025-06-22 19:56:31.760854 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.78s 2025-06-22 19:56:31.760861 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.47s 2025-06-22 19:56:31.760868 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.37s 2025-06-22 19:56:31.760875 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.08s 2025-06-22 19:56:31.760881 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.05s 2025-06-22 
19:56:31.760888 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.92s 2025-06-22 19:56:31.760894 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.85s 2025-06-22 19:56:31.760901 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.84s 2025-06-22 19:56:31.760908 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.78s 2025-06-22 19:56:31.760915 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.77s 2025-06-22 19:56:31.760922 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.76s 2025-06-22 19:56:31.760928 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.59s 2025-06-22 19:56:31.760935 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.47s 2025-06-22 19:56:31.760945 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.47s 2025-06-22 19:56:31.760952 | orchestrator | 2025-06-22 19:56:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:34.789737 | orchestrator | 2025-06-22 19:56:34 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:34.794856 | orchestrator | 2025-06-22 19:56:34 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:56:34.798970 | orchestrator | 2025-06-22 19:56:34 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:56:34.799018 | orchestrator | 2025-06-22 19:56:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:37.863279 | orchestrator | 2025-06-22 19:56:37 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:37.863937 | orchestrator | 2025-06-22 19:56:37 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:56:37.865291 | orchestrator | 2025-06-22 19:56:37 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:56:37.865346 | orchestrator | 2025-06-22 19:56:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:40.901604 | orchestrator | 2025-06-22 19:56:40 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:40.902737 | orchestrator | 2025-06-22 19:56:40 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:56:40.903112 | orchestrator | 2025-06-22 19:56:40 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:56:40.903483 | orchestrator | 2025-06-22 19:56:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:43.943458 | orchestrator | 2025-06-22 19:56:43 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:43.945019 | orchestrator | 2025-06-22 19:56:43 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:56:43.945807 | orchestrator | 2025-06-22 19:56:43 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:56:43.949552 | orchestrator | 2025-06-22 19:56:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:46.992727 | orchestrator | 2025-06-22 19:56:46 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:56:46.994418 | orchestrator | 2025-06-22 19:56:46 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is 
in state STARTED 2025-06-22 19:56:46.995585 | orchestrator | 2025-06-22 19:56:46 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:56:46.995616 | orchestrator | 2025-06-22 19:56:46 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds, repeated roughly every 3 seconds from 19:56:50 through 19:58:49, are condensed here: in every round tasks b549db4e-826f-4d91-8ced-61396a48bf3b, 87e6422b-2462-43c3-bbe3-8edf2376b369 and 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 are still reported in state STARTED, each round followed by "Wait 1 second(s) until the next check"; a minimal sketch of this polling pattern is given at the end of this excerpt ...]
2025-06-22 19:58:52.244651 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:58:52.246631 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task
87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:58:52.248335 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:58:52.248387 | orchestrator | 2025-06-22 19:58:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:55.300540 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:58:55.304196 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:58:55.306663 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:58:55.306701 | orchestrator | 2025-06-22 19:58:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:58.366516 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:58:58.369476 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:58:58.371861 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:58:58.371879 | orchestrator | 2025-06-22 19:58:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:01.422325 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state STARTED 2025-06-22 19:59:01.422573 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:01.423553 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:01.423588 | orchestrator | 2025-06-22 19:59:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:04.492691 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task b549db4e-826f-4d91-8ced-61396a48bf3b is in state SUCCESS 2025-06-22 19:59:04.495442 | orchestrator | 2025-06-22 19:59:04.495492 | orchestrator | 2025-06-22 19:59:04.495504 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-22 19:59:04.495517 | orchestrator | 2025-06-22 19:59:04.495528 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 19:59:04.495540 | orchestrator | Sunday 22 June 2025 19:47:44 +0000 (0:00:00.765) 0:00:00.765 *********** 2025-06-22 19:59:04.495553 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.495565 | orchestrator | 2025-06-22 19:59:04.495576 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 19:59:04.495587 | orchestrator | Sunday 22 June 2025 19:47:45 +0000 (0:00:01.029) 0:00:01.794 *********** 2025-06-22 19:59:04.495598 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.495628 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.495639 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.495650 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.495661 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.495672 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.495784 | orchestrator | 2025-06-22 19:59:04.495820 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 
19:59:04.495832 | orchestrator | Sunday 22 June 2025 19:47:47 +0000 (0:00:01.536) 0:00:03.331 *********** 2025-06-22 19:59:04.495843 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.495878 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.495889 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.495900 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.495911 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.495921 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.495959 | orchestrator | 2025-06-22 19:59:04.495970 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 19:59:04.495981 | orchestrator | Sunday 22 June 2025 19:47:48 +0000 (0:00:01.066) 0:00:04.398 *********** 2025-06-22 19:59:04.496093 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.496105 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.496115 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.496126 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.496137 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.496148 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.496158 | orchestrator | 2025-06-22 19:59:04.496170 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 19:59:04.496181 | orchestrator | Sunday 22 June 2025 19:47:49 +0000 (0:00:01.187) 0:00:05.585 *********** 2025-06-22 19:59:04.496207 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.496236 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.496248 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.496259 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.496270 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.496281 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.496291 | orchestrator | 2025-06-22 19:59:04.496302 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 19:59:04.496369 | orchestrator | Sunday 22 June 2025 19:47:50 +0000 (0:00:00.937) 0:00:06.523 *********** 2025-06-22 19:59:04.496382 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.496419 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.496431 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.496442 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.496453 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.496464 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.496474 | orchestrator | 2025-06-22 19:59:04.496485 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 19:59:04.496496 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.789) 0:00:07.312 *********** 2025-06-22 19:59:04.496507 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.496518 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.496595 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.496620 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.496631 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.496642 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.496653 | orchestrator | 2025-06-22 19:59:04.496664 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 19:59:04.496675 | orchestrator | Sunday 22 June 2025 19:47:52 +0000 (0:00:01.400) 0:00:08.712 *********** 2025-06-22 19:59:04.496686 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 19:59:04.496816 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.496828 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.496839 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.496850 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.496861 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.496871 | orchestrator | 2025-06-22 19:59:04.496882 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 19:59:04.496893 | orchestrator | Sunday 22 June 2025 19:47:53 +0000 (0:00:00.995) 0:00:09.708 *********** 2025-06-22 19:59:04.496904 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.496914 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.496926 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.496950 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.496962 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.496972 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.496983 | orchestrator | 2025-06-22 19:59:04.496994 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 19:59:04.497019 | orchestrator | Sunday 22 June 2025 19:47:54 +0000 (0:00:01.114) 0:00:10.823 *********** 2025-06-22 19:59:04.497030 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 19:59:04.497051 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.497062 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.497073 | orchestrator | 2025-06-22 19:59:04.497084 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 19:59:04.497094 | orchestrator | Sunday 22 June 2025 19:47:55 +0000 (0:00:00.799) 0:00:11.622 *********** 2025-06-22 19:59:04.497105 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.497116 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.497127 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.497138 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.497251 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.497265 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.497275 | orchestrator | 2025-06-22 19:59:04.497301 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 19:59:04.497313 | orchestrator | Sunday 22 June 2025 19:47:56 +0000 (0:00:01.158) 0:00:12.781 *********** 2025-06-22 19:59:04.497324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 19:59:04.497335 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.497346 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.497357 | orchestrator | 2025-06-22 19:59:04.497368 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 19:59:04.497379 | orchestrator | Sunday 22 June 2025 19:48:00 +0000 (0:00:03.294) 0:00:16.076 *********** 2025-06-22 19:59:04.497389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 19:59:04.497400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 19:59:04.497411 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 19:59:04.497422 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.497433 | orchestrator | 2025-06-22 19:59:04.497444 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 19:59:04.497454 | orchestrator | Sunday 22 June 2025 19:48:00 +0000 (0:00:00.641) 0:00:16.717 *********** 2025-06-22 19:59:04.497468 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497629 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.497640 | orchestrator | 2025-06-22 19:59:04.497651 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 19:59:04.497662 | orchestrator | Sunday 22 June 2025 19:48:01 +0000 (0:00:01.028) 0:00:17.746 *********** 2025-06-22 19:59:04.497682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497815 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.497828 | orchestrator | 2025-06-22 19:59:04.497839 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 19:59:04.497850 | orchestrator | Sunday 22 June 2025 19:48:02 +0000 (0:00:00.724) 0:00:18.471 *********** 2025-06-22 19:59:04.497945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': 
'2025-06-22 19:47:57.442038', 'end': '2025-06-22 19:47:57.690834', 'delta': '0:00:00.248796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.497987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 19:47:58.697231', 'end': '2025-06-22 19:47:58.909368', 'delta': '0:00:00.212137', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.498001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 19:47:59.642212', 'end': '2025-06-22 19:47:59.875515', 'delta': '0:00:00.233303', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.498012 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.498073 | orchestrator | 2025-06-22 19:59:04.498084 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 19:59:04.498166 | orchestrator | Sunday 22 June 2025 19:48:02 +0000 (0:00:00.263) 0:00:18.734 *********** 2025-06-22 19:59:04.498177 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.498188 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.498199 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.498210 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.498382 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.498413 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.498425 | orchestrator | 2025-06-22 19:59:04.498436 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 19:59:04.498465 | orchestrator | Sunday 22 June 2025 19:48:04 +0000 (0:00:01.673) 0:00:20.407 *********** 2025-06-22 19:59:04.498477 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.498488 | orchestrator | 2025-06-22 19:59:04.498499 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 19:59:04.498509 | orchestrator | Sunday 22 June 2025 19:48:05 +0000 (0:00:00.768) 0:00:21.175 *********** 2025-06-22 19:59:04.498520 | orchestrator 
| skipping: [testbed-node-3] 2025-06-22 19:59:04.498531 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.498542 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.498552 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.498563 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.498574 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.498585 | orchestrator | 2025-06-22 19:59:04.498595 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 19:59:04.498606 | orchestrator | Sunday 22 June 2025 19:48:06 +0000 (0:00:01.410) 0:00:22.586 *********** 2025-06-22 19:59:04.498617 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.498627 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.498638 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.498649 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.498659 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.498689 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.498701 | orchestrator | 2025-06-22 19:59:04.498711 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 19:59:04.498722 | orchestrator | Sunday 22 June 2025 19:48:08 +0000 (0:00:01.776) 0:00:24.362 *********** 2025-06-22 19:59:04.498733 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.498743 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.498754 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.498765 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.498776 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.498896 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.498932 | orchestrator | 2025-06-22 19:59:04.498944 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 19:59:04.498954 | orchestrator | Sunday 22 June 2025 19:48:09 +0000 (0:00:01.172) 0:00:25.535 *********** 2025-06-22 19:59:04.498965 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.498976 | orchestrator | 2025-06-22 19:59:04.498987 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 19:59:04.498998 | orchestrator | Sunday 22 June 2025 19:48:09 +0000 (0:00:00.183) 0:00:25.718 *********** 2025-06-22 19:59:04.499075 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499087 | orchestrator | 2025-06-22 19:59:04.499098 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 19:59:04.499108 | orchestrator | Sunday 22 June 2025 19:48:10 +0000 (0:00:00.426) 0:00:26.145 *********** 2025-06-22 19:59:04.499119 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499130 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499141 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499151 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499162 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499173 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499208 | orchestrator | 2025-06-22 19:59:04.499248 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 19:59:04.499260 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:00.940) 0:00:27.086 *********** 2025-06-22 19:59:04.499271 | 
orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499282 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499292 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499334 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499345 | orchestrator | 2025-06-22 19:59:04.499355 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 19:59:04.499366 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:01.297) 0:00:28.383 *********** 2025-06-22 19:59:04.499377 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499388 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499399 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499409 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499420 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499445 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499457 | orchestrator | 2025-06-22 19:59:04.499468 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 19:59:04.499479 | orchestrator | Sunday 22 June 2025 19:48:13 +0000 (0:00:00.790) 0:00:29.173 *********** 2025-06-22 19:59:04.499490 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499501 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499537 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499561 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499572 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499582 | orchestrator | 2025-06-22 19:59:04.499593 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 19:59:04.499604 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.950) 0:00:30.124 *********** 2025-06-22 19:59:04.499615 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499626 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499637 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499647 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499669 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499681 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499692 | orchestrator | 2025-06-22 19:59:04.499703 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 19:59:04.499714 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.796) 0:00:30.921 *********** 2025-06-22 19:59:04.499725 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499736 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499776 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499788 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499799 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499810 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499820 | orchestrator | 2025-06-22 19:59:04.499838 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 19:59:04.499849 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:01.042) 0:00:31.963 *********** 2025-06-22 
19:59:04.499860 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.499871 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.499882 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.499893 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.499903 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.499914 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.499925 | orchestrator | 2025-06-22 19:59:04.499936 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 19:59:04.499947 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:01.050) 0:00:33.014 *********** 2025-06-22 19:59:04.499959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4', 'dm-uuid-LVM-oxvSQqk8CZ0BFrSC8d4e0rP8csYAErcry6XISNtmUrICsxJFjc2IQUiMGa7kUKiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.499981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b', 'dm-uuid-LVM-OcROyVxQoe0QyuSJnJEBbfK3G7Cr6aiJ8AkjXA4FsdJp8J9PUEQNtc2h0J3H8MYK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a', 'dm-uuid-LVM-dqIZv4Ex6RJTpbtoxv36SxSdFHaLpNcfAdi3Iehhbv218Fm5SFYyLg2ZlD4VrsKj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24', 'dm-uuid-LVM-oIExWIXCm0QAKVc3a25VzudhAF6eHVer2zFseVPSsqNIMqp9EdN9EH1MctfYsl6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-USmStc-YabB-na20-s4fV-wHCS-qr0s-vI18Xt', 'scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f', 'scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3DO5KQ-d07a-0vOC-ST5j-Ufhw-ysA8-DXWSNk', 'scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0', 'scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d', 'scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500348 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.500379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RI6tee-Ctsq-b82Y-vhAs-qILk-onm9-30qwmc', 'scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e', 'scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5mbq4L-R3Q8-28jF-ju5S-NFdk-eNqv-9DpIch', 'scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985', 'scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c', 'scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2', 'dm-uuid-LVM-ruDhml2Uk5M5Hs7Cy5u1ZjJTjM3z7gZWOhMMz3cLdeNfWFfiH7KyrjJgLl3OifH3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb', 'dm-uuid-LVM-tgAuwQAE4RGK4uNkwQErpXJATvGxkfeGYsHnw3q9hUumzerdBk3iymo0hraEGQ0o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-22 19:59:04.500532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500640 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.500658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9hXsyq-bQW6-HAdc-GqEn-cEDn-KEnj-P18Wfe', 'scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b', 'scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gtPKO0-Hy2x-8HeF-yiH2-0AlN-kFRW-3l0tKg', 'scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238', 'scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6', 'scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.500884 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.500895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-22 19:59:04.500918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.500974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.500997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.501087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.501128 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.501139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 19:59:04.501173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part1', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part14', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part15', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part16', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.501193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 19:59:04.501205 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.501271 | orchestrator | 2025-06-22 19:59:04.501286 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 19:59:04.501298 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:02.315) 0:00:35.329 *********** 2025-06-22 19:59:04.501310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4', 'dm-uuid-LVM-oxvSQqk8CZ0BFrSC8d4e0rP8csYAErcry6XISNtmUrICsxJFjc2IQUiMGa7kUKiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b', 'dm-uuid-LVM-OcROyVxQoe0QyuSJnJEBbfK3G7Cr6aiJ8AkjXA4FsdJp8J9PUEQNtc2h0J3H8MYK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a', 'dm-uuid-LVM-dqIZv4Ex6RJTpbtoxv36SxSdFHaLpNcfAdi3Iehhbv218Fm5SFYyLg2ZlD4VrsKj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24', 'dm-uuid-LVM-oIExWIXCm0QAKVc3a25VzudhAF6eHVer2zFseVPSsqNIMqp9EdN9EH1MctfYsl6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501528 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-USmStc-YabB-na20-s4fV-wHCS-qr0s-vI18Xt', 'scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f', 'scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3DO5KQ-d07a-0vOC-ST5j-Ufhw-ysA8-DXWSNk', 'scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0', 'scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d', 'scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501671 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501702 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2', 'dm-uuid-LVM-ruDhml2Uk5M5Hs7Cy5u1ZjJTjM3z7gZWOhMMz3cLdeNfWFfiH7KyrjJgLl3OifH3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RI6tee-Ctsq-b82Y-vhAs-qILk-onm9-30qwmc', 'scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e', 'scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5mbq4L-R3Q8-28jF-ju5S-NFdk-eNqv-9DpIch', 'scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985', 'scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501893 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb', 'dm-uuid-LVM-tgAuwQAE4RGK4uNkwQErpXJATvGxkfeGYsHnw3q9hUumzerdBk3iymo0hraEGQ0o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c', 'scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501934 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501944 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.501959 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501969 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.501979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502112 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502132 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502191 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502329 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebef9530-476f-4d45-9413-6d9a7f459b52-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9hXsyq-bQW6-HAdc-GqEn-cEDn-KEnj-P18Wfe', 'scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b', 'scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502463 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.502486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gtPKO0-Hy2x-8HeF-yiH2-0AlN-kFRW-3l0tKg', 'scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238', 'scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6', 'scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502539 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502558 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502575 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502605 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502614 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502627 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502651 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502668 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f30cd8c-78a6-441b-83bd-3e59c68043fe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502682 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.502690 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502699 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.502707 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.502720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502729 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502737 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502749 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502758 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502771 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502784 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502793 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502806 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part1', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part14', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part15', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part16', 'scsi-SQEMU_QEMU_HARDDISK_45370c48-ef13-40f4-9d83-898af248b31f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502820 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 19:59:04.502828 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.502836 | orchestrator | 2025-06-22 19:59:04.502845 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 19:59:04.502853 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:01.623) 0:00:36.953 *********** 2025-06-22 19:59:04.502866 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.502874 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.502882 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.502890 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.502898 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.502905 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.502913 | orchestrator | 2025-06-22 19:59:04.502921 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 19:59:04.502929 | orchestrator | Sunday 22 June 2025 19:48:22 +0000 (0:00:01.507) 0:00:38.461 *********** 2025-06-22 19:59:04.502937 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.502945 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.502953 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.502961 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.502969 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.502977 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.502985 | orchestrator | 2025-06-22 19:59:04.502992 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 19:59:04.503000 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.601) 0:00:39.062 *********** 2025-06-22 19:59:04.503008 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503016 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503024 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503032 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503040 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503048 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.503056 | orchestrator | 2025-06-22 19:59:04.503064 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 19:59:04.503072 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.615) 0:00:39.678 *********** 2025-06-22 19:59:04.503080 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503087 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503095 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503111 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503123 | orchestrator | skipping: [testbed-node-2] 
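
The long run of per-device skipping results above comes from a ceph-facts task that loops over each host's gathered block-device facts (the loopN devices, the sda root disk with its cloudimg-rootfs, UEFI and BOOT partitions, the LVM-backed Ceph disks sdb and sdc, the spare sdd, and the config-2 sr0) and that is gated on osd_auto_discovery | default(False) | bool on the OSD nodes (testbed-node-3/4/5) and on inventory_hostname in groups.get(osd_group_name, []) on the control nodes (testbed-node-0/1/2). Neither condition holds in this testbed, so every iteration is skipped. The following is a minimal sketch of that loop/when pattern, assuming a dict2items loop over ansible_facts['devices']; the task name, fact name and the two extra device checks are illustrative, not the actual ceph-ansible source.

# Sketch only -- mirrors the loop/when shape seen in the skip messages above,
# where each item is a {key, value} pair produced by dict2items over the
# gathered device facts.
- name: Resolve candidate OSD devices from gathered facts (sketch)
  ansible.builtin.set_fact:
    _candidate_devices: "{{ _candidate_devices | default([]) + [item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  loop_control:
    label: "{{ item.key }}"                    # keeps skip/ok output short
  when:
    - osd_auto_discovery | default(False) | bool   # condition taken verbatim from the log
    - item.value.partitions | length == 0          # assumption: only consider unpartitioned disks
    - item.value.holders | length == 0             # assumption: skip disks already claimed by LVM/Ceph

The real task evidently prints the whole loop item for each skip, which is Ansible's default when no loop_control label is set and is why this stretch of the log is so large; the label in the sketch is only there to show how such output could be shortened.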
2025-06-22 19:59:04.503131 | orchestrator | 2025-06-22 19:59:04.503139 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 19:59:04.503147 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.583) 0:00:40.261 *********** 2025-06-22 19:59:04.503155 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503162 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503170 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503178 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503194 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.503201 | orchestrator | 2025-06-22 19:59:04.503209 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 19:59:04.503239 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:01.277) 0:00:41.539 *********** 2025-06-22 19:59:04.503249 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503257 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503277 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503285 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503293 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.503300 | orchestrator | 2025-06-22 19:59:04.503308 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 19:59:04.503316 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:00.779) 0:00:42.319 *********** 2025-06-22 19:59:04.503324 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 19:59:04.503332 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 19:59:04.503340 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 19:59:04.503348 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 19:59:04.503356 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 19:59:04.503364 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 19:59:04.503372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:59:04.503379 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 19:59:04.503387 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 19:59:04.503395 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 19:59:04.503403 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-22 19:59:04.503410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 19:59:04.503418 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-22 19:59:04.503426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 19:59:04.503434 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-22 19:59:04.503442 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-22 19:59:04.503449 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-22 19:59:04.503457 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-22 19:59:04.503465 | orchestrator | 2025-06-22 19:59:04.503473 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 19:59:04.503481 | orchestrator | Sunday 22 
June 2025 19:48:29 +0000 (0:00:03.005) 0:00:45.324 *********** 2025-06-22 19:59:04.503489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 19:59:04.503496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 19:59:04.503504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 19:59:04.503512 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 19:59:04.503528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 19:59:04.503535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 19:59:04.503543 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503556 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 19:59:04.503564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 19:59:04.503577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 19:59:04.503585 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:59:04.503601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:59:04.503609 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:59:04.503617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 19:59:04.503632 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 19:59:04.503640 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 19:59:04.503648 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 19:59:04.503664 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 19:59:04.503672 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 19:59:04.503680 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.503687 | orchestrator | 2025-06-22 19:59:04.503695 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 19:59:04.503703 | orchestrator | Sunday 22 June 2025 19:48:30 +0000 (0:00:00.721) 0:00:46.046 *********** 2025-06-22 19:59:04.503711 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.503719 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.503727 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.503735 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.503743 | orchestrator | 2025-06-22 19:59:04.503751 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 19:59:04.503760 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:01.442) 0:00:47.489 *********** 2025-06-22 19:59:04.503768 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503776 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503783 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503791 | orchestrator | 2025-06-22 19:59:04.503799 | orchestrator | TASK [ceph-facts 
: Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 19:59:04.503807 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.319) 0:00:47.809 *********** 2025-06-22 19:59:04.503815 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503823 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503831 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503838 | orchestrator | 2025-06-22 19:59:04.503846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 19:59:04.503854 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.546) 0:00:48.356 *********** 2025-06-22 19:59:04.503862 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503874 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.503882 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.503890 | orchestrator | 2025-06-22 19:59:04.503898 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 19:59:04.503906 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.513) 0:00:48.869 *********** 2025-06-22 19:59:04.503914 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.503921 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.503929 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.503937 | orchestrator | 2025-06-22 19:59:04.503945 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 19:59:04.503953 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.694) 0:00:49.564 *********** 2025-06-22 19:59:04.503966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.503974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.503982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.503990 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.503998 | orchestrator | 2025-06-22 19:59:04.504005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 19:59:04.504013 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.528) 0:00:50.093 *********** 2025-06-22 19:59:04.504021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.504029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.504036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.504044 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.504052 | orchestrator | 2025-06-22 19:59:04.504060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 19:59:04.504068 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.480) 0:00:50.573 *********** 2025-06-22 19:59:04.504076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.504083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.504091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.504099 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.504107 | orchestrator | 2025-06-22 19:59:04.504115 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 19:59:04.504123 | orchestrator | Sunday 22 June 2025 
19:48:35 +0000 (0:00:00.803) 0:00:51.377 *********** 2025-06-22 19:59:04.504131 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.504139 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.504147 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.504154 | orchestrator | 2025-06-22 19:59:04.504162 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 19:59:04.504170 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:01.049) 0:00:52.426 *********** 2025-06-22 19:59:04.504178 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 19:59:04.504186 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 19:59:04.504194 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 19:59:04.504201 | orchestrator | 2025-06-22 19:59:04.504213 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 19:59:04.504241 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:01.214) 0:00:53.641 *********** 2025-06-22 19:59:04.504249 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 19:59:04.504257 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.504265 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.504273 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 19:59:04.504281 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 19:59:04.504289 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 19:59:04.504297 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 19:59:04.504305 | orchestrator | 2025-06-22 19:59:04.504313 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 19:59:04.504321 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:01.057) 0:00:54.699 *********** 2025-06-22 19:59:04.504329 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 19:59:04.504336 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.504344 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.504358 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 19:59:04.504366 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 19:59:04.504374 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 19:59:04.504382 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 19:59:04.504390 | orchestrator | 2025-06-22 19:59:04.504398 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.504405 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:02.195) 0:00:56.894 *********** 2025-06-22 19:59:04.504414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.504422 | orchestrator | 2025-06-22 
19:59:04.504430 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.504438 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:01.238) 0:00:58.133 *********** 2025-06-22 19:59:04.504450 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.504459 | orchestrator | 2025-06-22 19:59:04.504467 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.504474 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.884) 0:00:59.017 *********** 2025-06-22 19:59:04.504482 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.504490 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.504498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.504506 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.504514 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.504522 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.504529 | orchestrator | 2025-06-22 19:59:04.504537 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.504545 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:01.351) 0:01:00.368 *********** 2025-06-22 19:59:04.504553 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.504561 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.504569 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.504577 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.504585 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.504593 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.504601 | orchestrator | 2025-06-22 19:59:04.504609 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.504617 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:01.138) 0:01:01.507 *********** 2025-06-22 19:59:04.504625 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.504633 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.504641 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.504649 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.504657 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.504664 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.504672 | orchestrator | 2025-06-22 19:59:04.504680 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.504688 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:01.304) 0:01:02.812 *********** 2025-06-22 19:59:04.504696 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.504704 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.504712 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.504720 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.504728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.504736 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.504744 | orchestrator | 2025-06-22 19:59:04.504752 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.504760 | orchestrator | Sunday 22 June 2025 19:48:47 +0000 (0:00:00.629) 0:01:03.441 *********** 2025-06-22 19:59:04.504773 | orchestrator | 
skipping: [testbed-node-3] 2025-06-22 19:59:04.504781 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.504789 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.504796 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.504804 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.504812 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.504820 | orchestrator | 2025-06-22 19:59:04.504828 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.504840 | orchestrator | Sunday 22 June 2025 19:48:48 +0000 (0:00:01.260) 0:01:04.702 *********** 2025-06-22 19:59:04.504848 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.504856 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.504864 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.504872 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.504880 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.504887 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.504895 | orchestrator | 2025-06-22 19:59:04.504903 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.504911 | orchestrator | Sunday 22 June 2025 19:48:49 +0000 (0:00:00.649) 0:01:05.351 *********** 2025-06-22 19:59:04.504919 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.504926 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.504934 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.504942 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.504950 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.504957 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.504965 | orchestrator | 2025-06-22 19:59:04.504973 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.504981 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:01.150) 0:01:06.502 *********** 2025-06-22 19:59:04.504989 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.504996 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505004 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505012 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505020 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505028 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505035 | orchestrator | 2025-06-22 19:59:04.505043 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.505051 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:01.271) 0:01:07.774 *********** 2025-06-22 19:59:04.505059 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505067 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505075 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505082 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505090 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505098 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505106 | orchestrator | 2025-06-22 19:59:04.505113 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.505121 | orchestrator | Sunday 22 June 2025 19:48:53 +0000 (0:00:01.595) 0:01:09.369 *********** 2025-06-22 19:59:04.505129 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505137 | orchestrator | skipping: 
[testbed-node-4] 2025-06-22 19:59:04.505145 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.505152 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505160 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505168 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505176 | orchestrator | 2025-06-22 19:59:04.505184 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.505191 | orchestrator | Sunday 22 June 2025 19:48:53 +0000 (0:00:00.611) 0:01:09.980 *********** 2025-06-22 19:59:04.505203 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505211 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.505273 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.505292 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505300 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505308 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505316 | orchestrator | 2025-06-22 19:59:04.505324 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.505332 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:01.057) 0:01:11.038 *********** 2025-06-22 19:59:04.505340 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505348 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505355 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505363 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505378 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505385 | orchestrator | 2025-06-22 19:59:04.505392 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.505399 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.681) 0:01:11.719 *********** 2025-06-22 19:59:04.505405 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505412 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505418 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505425 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505431 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505438 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505445 | orchestrator | 2025-06-22 19:59:04.505451 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.505458 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.827) 0:01:12.547 *********** 2025-06-22 19:59:04.505465 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505471 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505478 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505485 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505491 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505498 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505504 | orchestrator | 2025-06-22 19:59:04.505511 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.505518 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.609) 0:01:13.156 *********** 2025-06-22 19:59:04.505524 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505531 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.505538 | orchestrator | 
skipping: [testbed-node-5] 2025-06-22 19:59:04.505544 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505551 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505557 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505564 | orchestrator | 2025-06-22 19:59:04.505570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.505577 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.943) 0:01:14.100 *********** 2025-06-22 19:59:04.505584 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505590 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.505597 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.505603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.505610 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.505617 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.505623 | orchestrator | 2025-06-22 19:59:04.505635 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.505642 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.572) 0:01:14.673 *********** 2025-06-22 19:59:04.505649 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505655 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.505662 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.505668 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505675 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505682 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505688 | orchestrator | 2025-06-22 19:59:04.505695 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.505707 | orchestrator | Sunday 22 June 2025 19:48:59 +0000 (0:00:00.823) 0:01:15.496 *********** 2025-06-22 19:59:04.505714 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505721 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505728 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505734 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505741 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505747 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505754 | orchestrator | 2025-06-22 19:59:04.505761 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.505767 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.621) 0:01:16.118 *********** 2025-06-22 19:59:04.505774 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.505780 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.505787 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.505794 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.505800 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.505807 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.505813 | orchestrator | 2025-06-22 19:59:04.505820 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-22 19:59:04.505827 | orchestrator | Sunday 22 June 2025 19:49:01 +0000 (0:00:01.240) 0:01:17.358 *********** 2025-06-22 19:59:04.505833 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.505840 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.505847 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.505853 | orchestrator | changed: 
[testbed-node-0] 2025-06-22 19:59:04.505860 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.505867 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.505873 | orchestrator | 2025-06-22 19:59:04.505880 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-22 19:59:04.505887 | orchestrator | Sunday 22 June 2025 19:49:02 +0000 (0:00:01.623) 0:01:18.982 *********** 2025-06-22 19:59:04.505893 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.505900 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.505907 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.505913 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.505920 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.505927 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.505933 | orchestrator | 2025-06-22 19:59:04.505940 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-22 19:59:04.505951 | orchestrator | Sunday 22 June 2025 19:49:04 +0000 (0:00:01.731) 0:01:20.714 *********** 2025-06-22 19:59:04.505958 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.505965 | orchestrator | 2025-06-22 19:59:04.505972 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-22 19:59:04.505979 | orchestrator | Sunday 22 June 2025 19:49:05 +0000 (0:00:01.308) 0:01:22.022 *********** 2025-06-22 19:59:04.505985 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.505992 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.505998 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.506012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.506171 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.506179 | orchestrator | 2025-06-22 19:59:04.506186 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-22 19:59:04.506193 | orchestrator | Sunday 22 June 2025 19:49:06 +0000 (0:00:00.870) 0:01:22.893 *********** 2025-06-22 19:59:04.506200 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.506206 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.506213 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506241 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.506253 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.506274 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.506285 | orchestrator | 2025-06-22 19:59:04.506295 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-22 19:59:04.506302 | orchestrator | Sunday 22 June 2025 19:49:07 +0000 (0:00:00.561) 0:01:23.455 *********** 2025-06-22 19:59:04.506309 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506315 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506322 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506328 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506335 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506341 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506348 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506355 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506361 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 19:59:04.506368 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506374 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506405 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 19:59:04.506413 | orchestrator | 2025-06-22 19:59:04.506420 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-22 19:59:04.506426 | orchestrator | Sunday 22 June 2025 19:49:08 +0000 (0:00:01.525) 0:01:24.981 *********** 2025-06-22 19:59:04.506433 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.506440 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.506446 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.506453 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.506460 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.506466 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.506473 | orchestrator | 2025-06-22 19:59:04.506480 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-22 19:59:04.506486 | orchestrator | Sunday 22 June 2025 19:49:09 +0000 (0:00:00.887) 0:01:25.868 *********** 2025-06-22 19:59:04.506493 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.506500 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.506530 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.506552 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.506563 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.506573 | orchestrator | 2025-06-22 19:59:04.506583 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-22 19:59:04.506592 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.859) 0:01:26.728 *********** 2025-06-22 19:59:04.506602 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.506644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.506656 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506662 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.506669 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.506676 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.506682 | orchestrator | 2025-06-22 19:59:04.506689 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-22 19:59:04.506696 | orchestrator | Sunday 22 June 2025 19:49:11 +0000 (0:00:00.579) 0:01:27.308 *********** 2025-06-22 19:59:04.506702 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.506709 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.506722 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506729 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.506735 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.506742 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.506748 | orchestrator | 2025-06-22 19:59:04.506755 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-22 19:59:04.506762 | orchestrator | Sunday 22 June 2025 19:49:12 +0000 (0:00:00.863) 0:01:28.171 *********** 2025-06-22 19:59:04.506770 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.506778 | orchestrator | 2025-06-22 19:59:04.506791 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-22 19:59:04.506799 | orchestrator | Sunday 22 June 2025 19:49:13 +0000 (0:00:01.286) 0:01:29.458 *********** 2025-06-22 19:59:04.506806 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.506814 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.506822 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.506830 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.506838 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.506845 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.506853 | orchestrator | 2025-06-22 19:59:04.506861 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-22 19:59:04.506868 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:01:16.060) 0:02:45.518 *********** 2025-06-22 19:59:04.506876 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.506884 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.506892 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.506900 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.506908 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.506915 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.506923 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.506931 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.506938 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.506946 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.506954 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.506961 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.506969 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.506977 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.506984 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.506992 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507000 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.507008 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.507016 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.507024 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507031 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 19:59:04.507063 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 19:59:04.507072 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 19:59:04.507080 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507088 | orchestrator | 2025-06-22 19:59:04.507096 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-22 19:59:04.507112 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.803) 0:02:46.322 *********** 2025-06-22 19:59:04.507120 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507128 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507134 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507141 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507148 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507154 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507161 | orchestrator | 2025-06-22 19:59:04.507168 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-22 19:59:04.507174 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.512) 0:02:46.834 *********** 2025-06-22 19:59:04.507181 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507188 | orchestrator | 2025-06-22 19:59:04.507194 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-22 19:59:04.507201 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.124) 0:02:46.959 *********** 2025-06-22 19:59:04.507208 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507214 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507243 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507250 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507257 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507263 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507270 | orchestrator | 2025-06-22 19:59:04.507277 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-22 19:59:04.507283 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:00.805) 0:02:47.765 *********** 2025-06-22 19:59:04.507290 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507297 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507303 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507310 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507317 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507323 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507330 | orchestrator | 2025-06-22 19:59:04.507336 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-22 19:59:04.507343 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.606) 0:02:48.371 *********** 2025-06-22 
19:59:04.507350 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507356 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507363 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507369 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507376 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507382 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507389 | orchestrator | 2025-06-22 19:59:04.507396 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-22 19:59:04.507407 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.738) 0:02:49.109 *********** 2025-06-22 19:59:04.507414 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.507420 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.507427 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.507434 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.507440 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.507447 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.507453 | orchestrator | 2025-06-22 19:59:04.507460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-22 19:59:04.507467 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:02.213) 0:02:51.323 *********** 2025-06-22 19:59:04.507473 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.507480 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.507486 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.507493 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.507500 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.507506 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.507519 | orchestrator | 2025-06-22 19:59:04.507526 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-22 19:59:04.507532 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.772) 0:02:52.095 *********** 2025-06-22 19:59:04.507539 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.507546 | orchestrator | 2025-06-22 19:59:04.507553 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-22 19:59:04.507559 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:01.090) 0:02:53.185 *********** 2025-06-22 19:59:04.507566 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507573 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507579 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507586 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507592 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507599 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507605 | orchestrator | 2025-06-22 19:59:04.507612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-22 19:59:04.507619 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.669) 0:02:53.855 *********** 2025-06-22 19:59:04.507625 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507632 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507639 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507645 | orchestrator | skipping: [testbed-node-0] 
2025-06-22 19:59:04.507652 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507658 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507665 | orchestrator | 2025-06-22 19:59:04.507672 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-22 19:59:04.507678 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:00.875) 0:02:54.731 *********** 2025-06-22 19:59:04.507685 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507691 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507698 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507705 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507711 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507737 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507745 | orchestrator | 2025-06-22 19:59:04.507751 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-22 19:59:04.507758 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:00.761) 0:02:55.493 *********** 2025-06-22 19:59:04.507765 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507771 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507778 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507784 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507791 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507797 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507804 | orchestrator | 2025-06-22 19:59:04.507811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-22 19:59:04.507817 | orchestrator | Sunday 22 June 2025 19:50:40 +0000 (0:00:00.854) 0:02:56.347 *********** 2025-06-22 19:59:04.507824 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507830 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507837 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507844 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507850 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507857 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507863 | orchestrator | 2025-06-22 19:59:04.507870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-22 19:59:04.507877 | orchestrator | Sunday 22 June 2025 19:50:40 +0000 (0:00:00.515) 0:02:56.862 *********** 2025-06-22 19:59:04.507884 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507890 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507902 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507909 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.507915 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507922 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507928 | orchestrator | 2025-06-22 19:59:04.507935 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-22 19:59:04.507942 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:00.691) 0:02:57.553 *********** 2025-06-22 19:59:04.507949 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.507955 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.507962 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.507968 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 19:59:04.507975 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.507981 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.507988 | orchestrator | 2025-06-22 19:59:04.507995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-22 19:59:04.508002 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:00.532) 0:02:58.086 *********** 2025-06-22 19:59:04.508008 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.508015 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.508021 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.508028 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.508034 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.508041 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.508048 | orchestrator | 2025-06-22 19:59:04.508058 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-22 19:59:04.508065 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:00.804) 0:02:58.891 *********** 2025-06-22 19:59:04.508072 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.508079 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.508085 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.508092 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.508099 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.508105 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.508112 | orchestrator | 2025-06-22 19:59:04.508119 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-22 19:59:04.508126 | orchestrator | Sunday 22 June 2025 19:50:43 +0000 (0:00:01.130) 0:03:00.022 *********** 2025-06-22 19:59:04.508138 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.508149 | orchestrator | 2025-06-22 19:59:04.508160 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-22 19:59:04.508171 | orchestrator | Sunday 22 June 2025 19:50:44 +0000 (0:00:00.995) 0:03:01.017 *********** 2025-06-22 19:59:04.508182 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-22 19:59:04.508193 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-22 19:59:04.508204 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-22 19:59:04.508216 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-22 19:59:04.508266 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-22 19:59:04.508273 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-22 19:59:04.508279 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-22 19:59:04.508286 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-22 19:59:04.508293 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-22 19:59:04.508300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-22 19:59:04.508306 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508313 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508320 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 
2025-06-22 19:59:04.508326 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508339 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-22 19:59:04.508346 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508359 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508373 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-22 19:59:04.508409 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508415 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508435 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-22 19:59:04.508448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508455 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508462 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508469 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508475 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508482 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-22 19:59:04.508488 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508495 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508502 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508508 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508515 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508522 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-22 19:59:04.508528 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508535 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508542 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508549 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508555 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-22 19:59:04.508562 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508575 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508585 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508597 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508608 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-22 19:59:04.508624 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508635 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508644 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508655 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508665 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 19:59:04.508693 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508719 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 19:59:04.508726 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508732 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508739 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508745 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508751 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508757 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508764 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 19:59:04.508770 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508776 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508782 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508788 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508794 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508800 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508812 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 19:59:04.508819 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508825 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508849 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:04.508857 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:04.508863 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 
0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:04.508869 | orchestrator | 2025-06-22 19:59:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:04.508875 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508882 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508888 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508894 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-22 19:59:04.508900 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-22 19:59:04.508906 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508913 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-22 19:59:04.508919 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 19:59:04.508925 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-22 19:59:04.508931 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-22 19:59:04.508937 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-22 19:59:04.508943 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-22 19:59:04.508949 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-22 19:59:04.508960 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-22 19:59:04.508967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 19:59:04.508973 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-22 19:59:04.508979 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-22 19:59:04.508985 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-22 19:59:04.508991 | orchestrator | 2025-06-22 19:59:04.508997 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-22 19:59:04.509004 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:05.984) 0:03:07.001 *********** 2025-06-22 19:59:04.509010 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509016 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509022 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509035 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.509042 | orchestrator | 2025-06-22 19:59:04.509048 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-22 19:59:04.509055 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:00.888) 0:03:07.890 *********** 2025-06-22 19:59:04.509061 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509068 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509074 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509080 | orchestrator | 2025-06-22 19:59:04.509086 | orchestrator | TASK 
[ceph-config : Generate environment file] ********************************* 2025-06-22 19:59:04.509092 | orchestrator | Sunday 22 June 2025 19:50:52 +0000 (0:00:00.733) 0:03:08.623 *********** 2025-06-22 19:59:04.509099 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509105 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509111 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509117 | orchestrator | 2025-06-22 19:59:04.509124 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-22 19:59:04.509130 | orchestrator | Sunday 22 June 2025 19:50:53 +0000 (0:00:01.317) 0:03:09.940 *********** 2025-06-22 19:59:04.509136 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.509142 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.509148 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.509154 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509160 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509167 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509173 | orchestrator | 2025-06-22 19:59:04.509179 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-22 19:59:04.509185 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:00.448) 0:03:10.389 *********** 2025-06-22 19:59:04.509191 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.509197 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.509204 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.509210 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509216 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509241 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509247 | orchestrator | 2025-06-22 19:59:04.509254 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-22 19:59:04.509277 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:00.611) 0:03:11.000 *********** 2025-06-22 19:59:04.509290 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509296 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509302 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509308 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509321 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509327 | orchestrator | 2025-06-22 19:59:04.509333 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-22 19:59:04.509339 | orchestrator | Sunday 22 June 2025 19:50:55 +0000 (0:00:00.505) 0:03:11.505 *********** 2025-06-22 19:59:04.509345 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509352 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509358 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509364 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509370 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509376 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509382 | 
orchestrator | 2025-06-22 19:59:04.509388 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-22 19:59:04.509395 | orchestrator | Sunday 22 June 2025 19:50:56 +0000 (0:00:00.676) 0:03:12.182 *********** 2025-06-22 19:59:04.509401 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509407 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509413 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509419 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509426 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509432 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509438 | orchestrator | 2025-06-22 19:59:04.509444 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-22 19:59:04.509450 | orchestrator | Sunday 22 June 2025 19:50:56 +0000 (0:00:00.503) 0:03:12.685 *********** 2025-06-22 19:59:04.509456 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509462 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509468 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509475 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509481 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509487 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509493 | orchestrator | 2025-06-22 19:59:04.509499 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-22 19:59:04.509505 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.686) 0:03:13.371 *********** 2025-06-22 19:59:04.509512 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509518 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509524 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509530 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509536 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509542 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509548 | orchestrator | 2025-06-22 19:59:04.509558 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-22 19:59:04.509564 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.575) 0:03:13.947 *********** 2025-06-22 19:59:04.509571 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509577 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509583 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509589 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509595 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509601 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509607 | orchestrator | 2025-06-22 19:59:04.509613 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-22 19:59:04.509619 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:00.771) 0:03:14.718 *********** 2025-06-22 19:59:04.509625 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509635 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509641 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509647 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.509653 | orchestrator | ok: [testbed-node-4] 2025-06-22 
19:59:04.509659 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.509666 | orchestrator | 2025-06-22 19:59:04.509672 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-22 19:59:04.509678 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:02.537) 0:03:17.256 *********** 2025-06-22 19:59:04.509684 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.509690 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.509696 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.509703 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509709 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509715 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509721 | orchestrator | 2025-06-22 19:59:04.509727 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-22 19:59:04.509734 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:00.763) 0:03:18.019 *********** 2025-06-22 19:59:04.509740 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.509746 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.509752 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.509758 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509771 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509777 | orchestrator | 2025-06-22 19:59:04.509783 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-22 19:59:04.509789 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:00.578) 0:03:18.598 *********** 2025-06-22 19:59:04.509795 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509801 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509808 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509820 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509826 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509832 | orchestrator | 2025-06-22 19:59:04.509838 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-22 19:59:04.509844 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.768) 0:03:19.366 *********** 2025-06-22 19:59:04.509865 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509873 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509879 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.509885 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.509891 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.509897 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.509903 | orchestrator | 2025-06-22 19:59:04.509910 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-22 19:59:04.509916 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.555) 0:03:19.921 *********** 2025-06-22 19:59:04.509923 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 
'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-22 19:59:04.509932 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-22 19:59:04.509946 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.509952 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-22 19:59:04.509959 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-22 19:59:04.509965 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.509975 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-22 19:59:04.509982 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-22 19:59:04.509988 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.509994 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510000 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510006 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510012 | orchestrator | 2025-06-22 19:59:04.510045 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-22 19:59:04.510051 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:00.785) 0:03:20.707 *********** 2025-06-22 19:59:04.510057 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510063 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.510069 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510076 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510082 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510088 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510094 | orchestrator | 2025-06-22 19:59:04.510100 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-22 19:59:04.510106 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.568) 0:03:21.275 *********** 2025-06-22 19:59:04.510112 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510119 | orchestrator | skipping: 
[testbed-node-4] 2025-06-22 19:59:04.510125 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510131 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510137 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510143 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510149 | orchestrator | 2025-06-22 19:59:04.510155 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 19:59:04.510161 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.690) 0:03:21.966 *********** 2025-06-22 19:59:04.510168 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510174 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.510180 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510186 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510192 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510198 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510204 | orchestrator | 2025-06-22 19:59:04.510210 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 19:59:04.510255 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:00.562) 0:03:22.528 *********** 2025-06-22 19:59:04.510270 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510276 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.510282 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510288 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510294 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510300 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510306 | orchestrator | 2025-06-22 19:59:04.510313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 19:59:04.510319 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:00.734) 0:03:23.263 *********** 2025-06-22 19:59:04.510325 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510331 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.510337 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510344 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510350 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510356 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510362 | orchestrator | 2025-06-22 19:59:04.510368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 19:59:04.510375 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:00.552) 0:03:23.815 *********** 2025-06-22 19:59:04.510381 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.510387 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.510393 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510399 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.510405 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510412 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510418 | orchestrator | 2025-06-22 19:59:04.510424 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 19:59:04.510430 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:00.815) 0:03:24.630 *********** 2025-06-22 19:59:04.510436 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-06-22 19:59:04.510442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.510449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.510455 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510461 | orchestrator | 2025-06-22 19:59:04.510467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 19:59:04.510473 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:00.495) 0:03:25.126 *********** 2025-06-22 19:59:04.510480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.510486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.510492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.510498 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510504 | orchestrator | 2025-06-22 19:59:04.510510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 19:59:04.510517 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:00.370) 0:03:25.497 *********** 2025-06-22 19:59:04.510527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.510533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.510539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.510546 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510552 | orchestrator | 2025-06-22 19:59:04.510558 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 19:59:04.510564 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:00.385) 0:03:25.882 *********** 2025-06-22 19:59:04.510571 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.510577 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.510583 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.510589 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510595 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510602 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510613 | orchestrator | 2025-06-22 19:59:04.510619 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 19:59:04.510625 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:00.598) 0:03:26.481 *********** 2025-06-22 19:59:04.510631 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 19:59:04.510638 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 19:59:04.510644 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 19:59:04.510650 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-22 19:59:04.510656 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.510662 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-22 19:59:04.510671 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.510681 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-22 19:59:04.510692 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.510702 | orchestrator | 2025-06-22 19:59:04.510712 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-22 19:59:04.510721 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:01.956) 0:03:28.437 *********** 
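The "Generate Ceph file" task here templates /etc/ceph/ceph.conf on every node (all six hosts report changed below). A hand-written sketch of the rendered file, using only values visible in this log: the fsid is a placeholder, the mon host entry is inferred from the monitor addresses 192.168.16.10-12 seen later in this play, and which of the rgw keys end up in the file versus the cluster config store depends on the ceph-ansible version in use.

# Print a sketch of the templated ceph.conf (illustration only, not applied anywhere)
cat <<'EOF'
[global]
fsid = <cluster-fsid>
mon host = 192.168.16.10,192.168.16.11,192.168.16.12
public_network = 192.168.16.0/20
cluster_network = 192.168.16.0/20

[client.rgw.default.testbed-node-3.rgw0]
log_file = /var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log
rgw_frontends = beast endpoint=192.168.16.13:8081
EOF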
2025-06-22 19:59:04.510730 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.510740 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.510749 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.510758 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.510768 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.510777 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.510786 | orchestrator | 2025-06-22 19:59:04.510796 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.510806 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:02.671) 0:03:31.108 *********** 2025-06-22 19:59:04.510816 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.510826 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.510836 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.510846 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.510856 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.510867 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.510877 | orchestrator | 2025-06-22 19:59:04.510887 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 19:59:04.510898 | orchestrator | Sunday 22 June 2025 19:51:16 +0000 (0:00:01.160) 0:03:32.269 *********** 2025-06-22 19:59:04.510909 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.510954 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.510962 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.510968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.510975 | orchestrator | 2025-06-22 19:59:04.510981 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 19:59:04.510987 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:00.917) 0:03:33.187 *********** 2025-06-22 19:59:04.510993 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.511000 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.511006 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.511012 | orchestrator | 2025-06-22 19:59:04.511018 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 19:59:04.511025 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:00.353) 0:03:33.540 *********** 2025-06-22 19:59:04.511031 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.511037 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.511043 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.511049 | orchestrator | 2025-06-22 19:59:04.511055 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 19:59:04.511062 | orchestrator | Sunday 22 June 2025 19:51:18 +0000 (0:00:01.403) 0:03:34.944 *********** 2025-06-22 19:59:04.511068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:59:04.511074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:59:04.511087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:59:04.511093 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.511099 | orchestrator | 2025-06-22 19:59:04.511105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called 
after restart] ********* 2025-06-22 19:59:04.511111 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:00.643) 0:03:35.587 *********** 2025-06-22 19:59:04.511118 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.511124 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.511130 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.511136 | orchestrator | 2025-06-22 19:59:04.511142 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 19:59:04.511149 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:00.279) 0:03:35.866 *********** 2025-06-22 19:59:04.511155 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.511161 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.511167 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.511173 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.511180 | orchestrator | 2025-06-22 19:59:04.511186 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 19:59:04.511192 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.914) 0:03:36.781 *********** 2025-06-22 19:59:04.511203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.511210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.511216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.511244 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511251 | orchestrator | 2025-06-22 19:59:04.511257 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 19:59:04.511263 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:00.358) 0:03:37.139 *********** 2025-06-22 19:59:04.511269 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511276 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.511282 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.511288 | orchestrator | 2025-06-22 19:59:04.511294 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 19:59:04.511300 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:00.320) 0:03:37.459 *********** 2025-06-22 19:59:04.511306 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511313 | orchestrator | 2025-06-22 19:59:04.511319 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 19:59:04.511325 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:00.221) 0:03:37.681 *********** 2025-06-22 19:59:04.511331 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511338 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.511344 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.511350 | orchestrator | 2025-06-22 19:59:04.511356 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 19:59:04.511362 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:00.277) 0:03:37.958 *********** 2025-06-22 19:59:04.511369 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511375 | orchestrator | 2025-06-22 19:59:04.511381 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 19:59:04.511387 | 
orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:00.196) 0:03:38.154 *********** 2025-06-22 19:59:04.511396 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511406 | orchestrator | 2025-06-22 19:59:04.511416 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 19:59:04.511426 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:00.193) 0:03:38.348 *********** 2025-06-22 19:59:04.511436 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511446 | orchestrator | 2025-06-22 19:59:04.511454 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 19:59:04.511470 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:00.288) 0:03:38.636 *********** 2025-06-22 19:59:04.511481 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511490 | orchestrator | 2025-06-22 19:59:04.511501 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 19:59:04.511510 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:00.194) 0:03:38.831 *********** 2025-06-22 19:59:04.511517 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511523 | orchestrator | 2025-06-22 19:59:04.511529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 19:59:04.511535 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.198) 0:03:39.030 *********** 2025-06-22 19:59:04.511562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.511569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.511576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.511582 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511588 | orchestrator | 2025-06-22 19:59:04.511594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 19:59:04.511600 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.350) 0:03:39.381 *********** 2025-06-22 19:59:04.511607 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511613 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.511619 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.511625 | orchestrator | 2025-06-22 19:59:04.511631 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 19:59:04.511637 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.275) 0:03:39.656 *********** 2025-06-22 19:59:04.511644 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511650 | orchestrator | 2025-06-22 19:59:04.511656 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 19:59:04.511662 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.202) 0:03:39.858 *********** 2025-06-22 19:59:04.511668 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511674 | orchestrator | 2025-06-22 19:59:04.511681 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 19:59:04.511687 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:00.207) 0:03:40.066 *********** 2025-06-22 19:59:04.511693 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.511699 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.511705 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.511711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.511718 | orchestrator | 2025-06-22 19:59:04.511724 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 19:59:04.511730 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:01.064) 0:03:41.130 *********** 2025-06-22 19:59:04.511736 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.511742 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.511749 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.511755 | orchestrator | 2025-06-22 19:59:04.511761 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 19:59:04.511767 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:00.428) 0:03:41.559 *********** 2025-06-22 19:59:04.511773 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.511779 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.511786 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.511792 | orchestrator | 2025-06-22 19:59:04.511798 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 19:59:04.511808 | orchestrator | Sunday 22 June 2025 19:51:26 +0000 (0:00:01.176) 0:03:42.735 *********** 2025-06-22 19:59:04.511814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.511821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.511832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.511838 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.511844 | orchestrator | 2025-06-22 19:59:04.511850 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 19:59:04.511857 | orchestrator | Sunday 22 June 2025 19:51:27 +0000 (0:00:00.911) 0:03:43.647 *********** 2025-06-22 19:59:04.511863 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.511869 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.511875 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.511881 | orchestrator | 2025-06-22 19:59:04.511887 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 19:59:04.511894 | orchestrator | Sunday 22 June 2025 19:51:27 +0000 (0:00:00.291) 0:03:43.938 *********** 2025-06-22 19:59:04.511900 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.511906 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.511912 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.511918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.511924 | orchestrator | 2025-06-22 19:59:04.511931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 19:59:04.511937 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:00.862) 0:03:44.801 *********** 2025-06-22 19:59:04.511943 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.511949 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.511955 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.511961 | orchestrator | 2025-06-22 19:59:04.511968 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy rgw restart script] *********************** 2025-06-22 19:59:04.511974 | orchestrator | Sunday 22 June 2025 19:51:29 +0000 (0:00:00.287) 0:03:45.089 *********** 2025-06-22 19:59:04.511980 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.511986 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.511992 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.511998 | orchestrator | 2025-06-22 19:59:04.512005 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 19:59:04.512011 | orchestrator | Sunday 22 June 2025 19:51:30 +0000 (0:00:01.124) 0:03:46.213 *********** 2025-06-22 19:59:04.512017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.512023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.512029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.512036 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.512042 | orchestrator | 2025-06-22 19:59:04.512048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 19:59:04.512054 | orchestrator | Sunday 22 June 2025 19:51:30 +0000 (0:00:00.694) 0:03:46.907 *********** 2025-06-22 19:59:04.512060 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.512066 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.512073 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.512079 | orchestrator | 2025-06-22 19:59:04.512100 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-22 19:59:04.512107 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:00.332) 0:03:47.240 *********** 2025-06-22 19:59:04.512114 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.512120 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.512126 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.512132 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512138 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512144 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512151 | orchestrator | 2025-06-22 19:59:04.512157 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 19:59:04.512163 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:00.719) 0:03:47.959 *********** 2025-06-22 19:59:04.512169 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.512180 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.512186 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.512192 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.512199 | orchestrator | 2025-06-22 19:59:04.512205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 19:59:04.512211 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:00.908) 0:03:48.868 *********** 2025-06-22 19:59:04.512255 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512263 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512269 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512275 | orchestrator | 2025-06-22 19:59:04.512282 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 
19:59:04.512288 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:00.321) 0:03:49.189 *********** 2025-06-22 19:59:04.512294 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.512300 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.512306 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.512312 | orchestrator | 2025-06-22 19:59:04.512319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 19:59:04.512325 | orchestrator | Sunday 22 June 2025 19:51:34 +0000 (0:00:01.085) 0:03:50.275 *********** 2025-06-22 19:59:04.512331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:59:04.512337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:59:04.512343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:59:04.512350 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512356 | orchestrator | 2025-06-22 19:59:04.512362 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 19:59:04.512368 | orchestrator | Sunday 22 June 2025 19:51:34 +0000 (0:00:00.698) 0:03:50.973 *********** 2025-06-22 19:59:04.512374 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512381 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512387 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512393 | orchestrator | 2025-06-22 19:59:04.512406 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-22 19:59:04.512413 | orchestrator | 2025-06-22 19:59:04.512419 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.512425 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:00.677) 0:03:51.651 *********** 2025-06-22 19:59:04.512432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.512439 | orchestrator | 2025-06-22 19:59:04.512445 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.512451 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:00.462) 0:03:52.113 *********** 2025-06-22 19:59:04.512457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.512464 | orchestrator | 2025-06-22 19:59:04.512470 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.512476 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:00.640) 0:03:52.754 *********** 2025-06-22 19:59:04.512482 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512488 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512494 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512501 | orchestrator | 2025-06-22 19:59:04.512507 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.512513 | orchestrator | Sunday 22 June 2025 19:51:37 +0000 (0:00:00.723) 0:03:53.477 *********** 2025-06-22 19:59:04.512519 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512526 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512532 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512543 | orchestrator | 2025-06-22 
19:59:04.512549 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.512555 | orchestrator | Sunday 22 June 2025 19:51:37 +0000 (0:00:00.297) 0:03:53.774 *********** 2025-06-22 19:59:04.512561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512567 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512573 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512579 | orchestrator | 2025-06-22 19:59:04.512584 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.512590 | orchestrator | Sunday 22 June 2025 19:51:38 +0000 (0:00:00.283) 0:03:54.058 *********** 2025-06-22 19:59:04.512595 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512600 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512606 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512611 | orchestrator | 2025-06-22 19:59:04.512616 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.512622 | orchestrator | Sunday 22 June 2025 19:51:38 +0000 (0:00:00.510) 0:03:54.569 *********** 2025-06-22 19:59:04.512627 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512633 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512638 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512643 | orchestrator | 2025-06-22 19:59:04.512649 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.512670 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:00:00.848) 0:03:55.418 *********** 2025-06-22 19:59:04.512676 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512681 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512687 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512692 | orchestrator | 2025-06-22 19:59:04.512698 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.512703 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:00:00.302) 0:03:55.720 *********** 2025-06-22 19:59:04.512709 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512715 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512720 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512725 | orchestrator | 2025-06-22 19:59:04.512731 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.512736 | orchestrator | Sunday 22 June 2025 19:51:40 +0000 (0:00:00.327) 0:03:56.048 *********** 2025-06-22 19:59:04.512742 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512747 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512753 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512758 | orchestrator | 2025-06-22 19:59:04.512764 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.512769 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:01.021) 0:03:57.069 *********** 2025-06-22 19:59:04.512775 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512780 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512786 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512791 | orchestrator | 2025-06-22 19:59:04.512797 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 
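The "Check for a ... container" tasks above only probe whether a daemon container is already running on each host, so the handlers know which services may need a restart. A rough hand-run equivalent is shown below; the container runtime and the exact name filter are assumptions (substitute podman if that is the runtime in use).

# Non-empty output means the daemon container exists on this host
docker ps -q --filter "name=ceph-mon-$(hostname)"
docker ps -q --filter "name=ceph-mgr-$(hostname)"
docker ps -q --filter "name=ceph-crash-$(hostname)"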
2025-06-22 19:59:04.512802 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:00.707) 0:03:57.776 *********** 2025-06-22 19:59:04.512808 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512813 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512818 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512824 | orchestrator | 2025-06-22 19:59:04.512829 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.512835 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.263) 0:03:58.040 *********** 2025-06-22 19:59:04.512840 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.512846 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.512851 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.512856 | orchestrator | 2025-06-22 19:59:04.512862 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.512871 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.280) 0:03:58.320 *********** 2025-06-22 19:59:04.512876 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512882 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512887 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512893 | orchestrator | 2025-06-22 19:59:04.512898 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.512904 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.443) 0:03:58.764 *********** 2025-06-22 19:59:04.512909 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512918 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512923 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512929 | orchestrator | 2025-06-22 19:59:04.512935 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.512940 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.280) 0:03:59.045 *********** 2025-06-22 19:59:04.512946 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512951 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512956 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512962 | orchestrator | 2025-06-22 19:59:04.512967 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.512973 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.287) 0:03:59.332 *********** 2025-06-22 19:59:04.512978 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.512984 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.512989 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.512994 | orchestrator | 2025-06-22 19:59:04.513000 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.513005 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.303) 0:03:59.635 *********** 2025-06-22 19:59:04.513011 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.513016 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.513021 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.513027 | orchestrator | 2025-06-22 19:59:04.513032 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.513038 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 
(0:00:00.479) 0:04:00.115 *********** 2025-06-22 19:59:04.513043 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513049 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513054 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513059 | orchestrator | 2025-06-22 19:59:04.513065 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.513070 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:00.396) 0:04:00.512 *********** 2025-06-22 19:59:04.513075 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513081 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513086 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513092 | orchestrator | 2025-06-22 19:59:04.513097 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.513103 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:00.368) 0:04:00.880 *********** 2025-06-22 19:59:04.513108 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513114 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513119 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513124 | orchestrator | 2025-06-22 19:59:04.513130 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-22 19:59:04.513135 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:00.745) 0:04:01.625 *********** 2025-06-22 19:59:04.513141 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513146 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513151 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513157 | orchestrator | 2025-06-22 19:59:04.513162 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-22 19:59:04.513168 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:00.383) 0:04:02.008 *********** 2025-06-22 19:59:04.513193 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.513199 | orchestrator | 2025-06-22 19:59:04.513205 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-22 19:59:04.513210 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:00.539) 0:04:02.548 *********** 2025-06-22 19:59:04.513216 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.513232 | orchestrator | 2025-06-22 19:59:04.513238 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-22 19:59:04.513243 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:00.115) 0:04:02.663 *********** 2025-06-22 19:59:04.513249 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:59:04.513254 | orchestrator | 2025-06-22 19:59:04.513260 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-22 19:59:04.513265 | orchestrator | Sunday 22 June 2025 19:51:47 +0000 (0:00:01.337) 0:04:04.001 *********** 2025-06-22 19:59:04.513271 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513276 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513282 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513287 | orchestrator | 2025-06-22 19:59:04.513292 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-22 19:59:04.513298 | orchestrator | Sunday 22 June 
2025 19:51:48 +0000 (0:00:00.306) 0:04:04.307 *********** 2025-06-22 19:59:04.513304 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513309 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513314 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513320 | orchestrator | 2025-06-22 19:59:04.513325 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-22 19:59:04.513331 | orchestrator | Sunday 22 June 2025 19:51:48 +0000 (0:00:00.291) 0:04:04.599 *********** 2025-06-22 19:59:04.513336 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513342 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513347 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513353 | orchestrator | 2025-06-22 19:59:04.513358 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-22 19:59:04.513363 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:01.128) 0:04:05.727 *********** 2025-06-22 19:59:04.513369 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513374 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513380 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513385 | orchestrator | 2025-06-22 19:59:04.513391 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-22 19:59:04.513396 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:01.001) 0:04:06.729 *********** 2025-06-22 19:59:04.513401 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513407 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513412 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513418 | orchestrator | 2025-06-22 19:59:04.513423 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-22 19:59:04.513432 | orchestrator | Sunday 22 June 2025 19:51:51 +0000 (0:00:00.715) 0:04:07.445 *********** 2025-06-22 19:59:04.513438 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513443 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513449 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513454 | orchestrator | 2025-06-22 19:59:04.513460 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-22 19:59:04.513465 | orchestrator | Sunday 22 June 2025 19:51:52 +0000 (0:00:00.701) 0:04:08.147 *********** 2025-06-22 19:59:04.513471 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513476 | orchestrator | 2025-06-22 19:59:04.513482 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-22 19:59:04.513487 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:01.177) 0:04:09.324 *********** 2025-06-22 19:59:04.513493 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513502 | orchestrator | 2025-06-22 19:59:04.513507 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-22 19:59:04.513513 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:00.603) 0:04:09.928 *********** 2025-06-22 19:59:04.513518 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.513524 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.513529 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.513535 | 
orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 19:59:04.513540 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 19:59:04.513545 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 19:59:04.513551 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 19:59:04.513556 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-22 19:59:04.513562 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 19:59:04.513567 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-06-22 19:59:04.513573 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-22 19:59:04.513578 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-22 19:59:04.513584 | orchestrator | 2025-06-22 19:59:04.513589 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-22 19:59:04.513595 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:03.203) 0:04:13.132 *********** 2025-06-22 19:59:04.513600 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513606 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513611 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513616 | orchestrator | 2025-06-22 19:59:04.513622 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-22 19:59:04.513627 | orchestrator | Sunday 22 June 2025 19:51:58 +0000 (0:00:01.446) 0:04:14.579 *********** 2025-06-22 19:59:04.513633 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513638 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513644 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513649 | orchestrator | 2025-06-22 19:59:04.513654 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-22 19:59:04.513674 | orchestrator | Sunday 22 June 2025 19:51:58 +0000 (0:00:00.309) 0:04:14.889 *********** 2025-06-22 19:59:04.513681 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.513686 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.513691 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.513697 | orchestrator | 2025-06-22 19:59:04.513702 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-22 19:59:04.513708 | orchestrator | Sunday 22 June 2025 19:51:59 +0000 (0:00:00.401) 0:04:15.290 *********** 2025-06-22 19:59:04.513713 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513719 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513724 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513729 | orchestrator | 2025-06-22 19:59:04.513735 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-22 19:59:04.513740 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:01.862) 0:04:17.153 *********** 2025-06-22 19:59:04.513746 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513751 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513756 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513762 | orchestrator | 2025-06-22 19:59:04.513767 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-22 19:59:04.513773 | orchestrator | Sunday 22 June 2025 19:52:02 +0000 (0:00:01.644) 
0:04:18.797 *********** 2025-06-22 19:59:04.513778 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.513783 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.513789 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.513794 | orchestrator | 2025-06-22 19:59:04.513799 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-22 19:59:04.513811 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.360) 0:04:19.157 *********** 2025-06-22 19:59:04.513816 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.513821 | orchestrator | 2025-06-22 19:59:04.513827 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-22 19:59:04.513832 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.584) 0:04:19.741 *********** 2025-06-22 19:59:04.513838 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.513843 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.513848 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.513854 | orchestrator | 2025-06-22 19:59:04.513859 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-22 19:59:04.513865 | orchestrator | Sunday 22 June 2025 19:52:04 +0000 (0:00:00.599) 0:04:20.341 *********** 2025-06-22 19:59:04.513870 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.513875 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.513881 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.513886 | orchestrator | 2025-06-22 19:59:04.513891 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-22 19:59:04.513897 | orchestrator | Sunday 22 June 2025 19:52:04 +0000 (0:00:00.332) 0:04:20.674 *********** 2025-06-22 19:59:04.513905 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.513911 | orchestrator | 2025-06-22 19:59:04.513916 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-22 19:59:04.513922 | orchestrator | Sunday 22 June 2025 19:52:05 +0000 (0:00:00.432) 0:04:21.106 *********** 2025-06-22 19:59:04.513927 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513933 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513938 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513943 | orchestrator | 2025-06-22 19:59:04.513949 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-22 19:59:04.513954 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:01.805) 0:04:22.911 *********** 2025-06-22 19:59:04.513959 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513965 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.513970 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.513975 | orchestrator | 2025-06-22 19:59:04.513981 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-22 19:59:04.513986 | orchestrator | Sunday 22 June 2025 19:52:07 +0000 (0:00:01.074) 0:04:23.986 *********** 2025-06-22 19:59:04.513992 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.513997 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.514002 | 
orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.514008 | orchestrator | 2025-06-22 19:59:04.514035 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-22 19:59:04.514042 | orchestrator | Sunday 22 June 2025 19:52:09 +0000 (0:00:01.586) 0:04:25.573 *********** 2025-06-22 19:59:04.514047 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.514053 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.514058 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.514064 | orchestrator | 2025-06-22 19:59:04.514069 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-22 19:59:04.514075 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:02.029) 0:04:27.603 *********** 2025-06-22 19:59:04.514080 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.514086 | orchestrator | 2025-06-22 19:59:04.514091 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-22 19:59:04.514097 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:00.770) 0:04:28.373 *********** 2025-06-22 19:59:04.514102 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-22 19:59:04.514112 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514117 | orchestrator | 2025-06-22 19:59:04.514123 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-22 19:59:04.514128 | orchestrator | Sunday 22 June 2025 19:52:34 +0000 (0:00:21.776) 0:04:50.149 *********** 2025-06-22 19:59:04.514134 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514139 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514145 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514150 | orchestrator | 2025-06-22 19:59:04.514172 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-22 19:59:04.514178 | orchestrator | Sunday 22 June 2025 19:52:43 +0000 (0:00:09.290) 0:04:59.440 *********** 2025-06-22 19:59:04.514183 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514189 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514194 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514200 | orchestrator | 2025-06-22 19:59:04.514205 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-22 19:59:04.514211 | orchestrator | Sunday 22 June 2025 19:52:43 +0000 (0:00:00.287) 0:04:59.727 *********** 2025-06-22 19:59:04.514227 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-22 19:59:04.514235 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'cluster_network', 
'value': '192.168.16.0/20'}]) 2025-06-22 19:59:04.514242 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-22 19:59:04.514249 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-22 19:59:04.514259 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-22 19:59:04.514267 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cc3c78e7ee82a5c881896c023b7b26c1290d3182'}])  2025-06-22 19:59:04.514275 | orchestrator | 2025-06-22 19:59:04.514280 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.514286 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:14.649) 0:05:14.377 *********** 2025-06-22 19:59:04.514295 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514301 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514306 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514311 | orchestrator | 2025-06-22 19:59:04.514317 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 19:59:04.514322 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:00.376) 0:05:14.753 *********** 2025-06-22 19:59:04.514327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.514333 | orchestrator | 2025-06-22 19:59:04.514338 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 19:59:04.514344 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.779) 0:05:15.533 *********** 2025-06-22 19:59:04.514349 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514354 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514360 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514365 | orchestrator | 2025-06-22 19:59:04.514370 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 19:59:04.514376 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.321) 0:05:15.854 *********** 2025-06-22 19:59:04.514381 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514387 | 
orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514392 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514397 | orchestrator | 2025-06-22 19:59:04.514403 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 19:59:04.514408 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:00.318) 0:05:16.173 *********** 2025-06-22 19:59:04.514414 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:59:04.514434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:59:04.514440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:59:04.514446 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514451 | orchestrator | 2025-06-22 19:59:04.514456 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 19:59:04.514462 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:01.003) 0:05:17.176 *********** 2025-06-22 19:59:04.514467 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514473 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514478 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514483 | orchestrator | 2025-06-22 19:59:04.514489 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-22 19:59:04.514494 | orchestrator | 2025-06-22 19:59:04.514500 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.514505 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:00.819) 0:05:17.996 *********** 2025-06-22 19:59:04.514510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.514516 | orchestrator | 2025-06-22 19:59:04.514521 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.514527 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 (0:00:00.511) 0:05:18.507 *********** 2025-06-22 19:59:04.514532 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.514538 | orchestrator | 2025-06-22 19:59:04.514543 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.514549 | orchestrator | Sunday 22 June 2025 19:53:03 +0000 (0:00:00.683) 0:05:19.191 *********** 2025-06-22 19:59:04.514554 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514559 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514565 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514570 | orchestrator | 2025-06-22 19:59:04.514575 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.514581 | orchestrator | Sunday 22 June 2025 19:53:03 +0000 (0:00:00.672) 0:05:19.864 *********** 2025-06-22 19:59:04.514590 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514601 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514606 | orchestrator | 2025-06-22 19:59:04.514612 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.514617 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 
(0:00:00.284) 0:05:20.148 *********** 2025-06-22 19:59:04.514623 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514633 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514639 | orchestrator | 2025-06-22 19:59:04.514644 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.514653 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 (0:00:00.484) 0:05:20.633 *********** 2025-06-22 19:59:04.514658 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514663 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514669 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514674 | orchestrator | 2025-06-22 19:59:04.514679 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.514686 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 (0:00:00.246) 0:05:20.879 *********** 2025-06-22 19:59:04.514696 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514705 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514713 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514722 | orchestrator | 2025-06-22 19:59:04.514730 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.514739 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:00.657) 0:05:21.537 *********** 2025-06-22 19:59:04.514747 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514754 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514762 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514771 | orchestrator | 2025-06-22 19:59:04.514779 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.514789 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:00.254) 0:05:21.792 *********** 2025-06-22 19:59:04.514798 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.514806 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514815 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514823 | orchestrator | 2025-06-22 19:59:04.514832 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.514843 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.466) 0:05:22.258 *********** 2025-06-22 19:59:04.514848 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514854 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514859 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514864 | orchestrator | 2025-06-22 19:59:04.514870 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.514875 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.611) 0:05:22.869 *********** 2025-06-22 19:59:04.514880 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514886 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514891 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514896 | orchestrator | 2025-06-22 19:59:04.514902 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.514907 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.635) 0:05:23.505 *********** 2025-06-22 19:59:04.514912 | orchestrator | skipping: [testbed-node-0] 
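Note: the monitor bring-up and the "Set cluster configs" items logged above can be spot-checked from any mon node with the stock ceph CLI. A minimal sketch, not taken from this job's output (the values are copied from the task items above):

    ceph quorum_status -f json                              # quorum_names should list testbed-node-0..2
    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config get mon public_network                      # confirm the value reached the config store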
2025-06-22 19:59:04.514918 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.514923 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.514928 | orchestrator | 2025-06-22 19:59:04.514934 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.514939 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.336) 0:05:23.841 *********** 2025-06-22 19:59:04.514944 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.514955 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.514960 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.514966 | orchestrator | 2025-06-22 19:59:04.514993 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.515000 | orchestrator | Sunday 22 June 2025 19:53:08 +0000 (0:00:00.623) 0:05:24.465 *********** 2025-06-22 19:59:04.515005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515011 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515016 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515022 | orchestrator | 2025-06-22 19:59:04.515027 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.515033 | orchestrator | Sunday 22 June 2025 19:53:08 +0000 (0:00:00.357) 0:05:24.823 *********** 2025-06-22 19:59:04.515038 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515044 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515050 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515055 | orchestrator | 2025-06-22 19:59:04.515060 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.515066 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.320) 0:05:25.144 *********** 2025-06-22 19:59:04.515071 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515077 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515082 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515087 | orchestrator | 2025-06-22 19:59:04.515093 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.515098 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.363) 0:05:25.507 *********** 2025-06-22 19:59:04.515104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515109 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515115 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515120 | orchestrator | 2025-06-22 19:59:04.515126 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.515131 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.633) 0:05:26.141 *********** 2025-06-22 19:59:04.515136 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515142 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515147 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515153 | orchestrator | 2025-06-22 19:59:04.515158 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.515163 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.356) 0:05:26.497 *********** 2025-06-22 19:59:04.515169 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.515174 | orchestrator | ok: [testbed-node-1] 2025-06-22 
19:59:04.515180 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.515185 | orchestrator | 2025-06-22 19:59:04.515191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.515196 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.343) 0:05:26.840 *********** 2025-06-22 19:59:04.515202 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.515233 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.515240 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.515245 | orchestrator | 2025-06-22 19:59:04.515251 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.515264 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.360) 0:05:27.201 *********** 2025-06-22 19:59:04.515269 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.515275 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.515280 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.515285 | orchestrator | 2025-06-22 19:59:04.515291 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-22 19:59:04.515296 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.779) 0:05:27.980 *********** 2025-06-22 19:59:04.515302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:59:04.515307 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.515317 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.515323 | orchestrator | 2025-06-22 19:59:04.515328 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-22 19:59:04.515334 | orchestrator | Sunday 22 June 2025 19:53:12 +0000 (0:00:00.616) 0:05:28.596 *********** 2025-06-22 19:59:04.515339 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.515345 | orchestrator | 2025-06-22 19:59:04.515350 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-22 19:59:04.515355 | orchestrator | Sunday 22 June 2025 19:53:13 +0000 (0:00:00.485) 0:05:29.082 *********** 2025-06-22 19:59:04.515361 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.515366 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.515372 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.515377 | orchestrator | 2025-06-22 19:59:04.515383 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-22 19:59:04.515388 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:01.042) 0:05:30.125 *********** 2025-06-22 19:59:04.515393 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515399 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515404 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515409 | orchestrator | 2025-06-22 19:59:04.515415 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-22 19:59:04.515420 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:00.344) 0:05:30.469 *********** 2025-06-22 19:59:04.515425 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.515431 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.515437 | 
orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.515442 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-22 19:59:04.515448 | orchestrator | 2025-06-22 19:59:04.515453 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-22 19:59:04.515458 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:12.065) 0:05:42.535 *********** 2025-06-22 19:59:04.515464 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.515469 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.515475 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.515480 | orchestrator | 2025-06-22 19:59:04.515486 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-22 19:59:04.515510 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:00.342) 0:05:42.878 *********** 2025-06-22 19:59:04.515516 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 19:59:04.515522 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 19:59:04.515527 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 19:59:04.515533 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.515538 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.515544 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.515549 | orchestrator | 2025-06-22 19:59:04.515554 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-22 19:59:04.515560 | orchestrator | Sunday 22 June 2025 19:53:29 +0000 (0:00:02.561) 0:05:45.440 *********** 2025-06-22 19:59:04.515565 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 19:59:04.515571 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 19:59:04.515577 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 19:59:04.515582 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:59:04.515587 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 19:59:04.515593 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 19:59:04.515598 | orchestrator | 2025-06-22 19:59:04.515604 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-22 19:59:04.515614 | orchestrator | Sunday 22 June 2025 19:53:30 +0000 (0:00:01.358) 0:05:46.798 *********** 2025-06-22 19:59:04.515620 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.515625 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.515631 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.515636 | orchestrator | 2025-06-22 19:59:04.515641 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-22 19:59:04.515647 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:00:00.650) 0:05:47.448 *********** 2025-06-22 19:59:04.515652 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515657 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515663 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515668 | orchestrator | 2025-06-22 19:59:04.515674 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-22 19:59:04.515680 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:00:00.325) 0:05:47.774 
*********** 2025-06-22 19:59:04.515685 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515690 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515696 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515701 | orchestrator | 2025-06-22 19:59:04.515707 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-22 19:59:04.515712 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.256) 0:05:48.031 *********** 2025-06-22 19:59:04.515718 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.515724 | orchestrator | 2025-06-22 19:59:04.515737 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-22 19:59:04.515746 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.622) 0:05:48.653 *********** 2025-06-22 19:59:04.515755 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515772 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515781 | orchestrator | 2025-06-22 19:59:04.515790 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-22 19:59:04.515796 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.297) 0:05:48.951 *********** 2025-06-22 19:59:04.515801 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.515806 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.515812 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.515822 | orchestrator | 2025-06-22 19:59:04.515830 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-22 19:59:04.515839 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.277) 0:05:49.229 *********** 2025-06-22 19:59:04.515847 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.515857 | orchestrator | 2025-06-22 19:59:04.515864 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-22 19:59:04.515872 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.621) 0:05:49.850 *********** 2025-06-22 19:59:04.515881 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.515888 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.515897 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.515905 | orchestrator | 2025-06-22 19:59:04.515913 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-22 19:59:04.515922 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:00.995) 0:05:50.845 *********** 2025-06-22 19:59:04.515930 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.515939 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.515948 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.515957 | orchestrator | 2025-06-22 19:59:04.515966 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-22 19:59:04.515975 | orchestrator | Sunday 22 June 2025 19:53:35 +0000 (0:00:00.971) 0:05:51.817 *********** 2025-06-22 19:59:04.515990 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.516001 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.516010 | orchestrator | 
changed: [testbed-node-1] 2025-06-22 19:59:04.516019 | orchestrator | 2025-06-22 19:59:04.516029 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-22 19:59:04.516035 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:02.076) 0:05:53.894 *********** 2025-06-22 19:59:04.516041 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.516046 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.516052 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.516057 | orchestrator | 2025-06-22 19:59:04.516063 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-22 19:59:04.516068 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:01.980) 0:05:55.874 *********** 2025-06-22 19:59:04.516101 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.516107 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.516113 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-22 19:59:04.516119 | orchestrator | 2025-06-22 19:59:04.516124 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-22 19:59:04.516130 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:00.405) 0:05:56.280 *********** 2025-06-22 19:59:04.516135 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-22 19:59:04.516141 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-22 19:59:04.516146 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-22 19:59:04.516152 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-22 19:59:04.516157 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
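Note: the mgr bring-up around this point boils down to a keyring, a readiness wait, and module toggles (the enable/disable items logged just below). A rough CLI equivalent, not taken from this job's output, with <hostname> as a placeholder:

    ceph auth get-or-create mgr.<hostname> mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    ceph mgr stat                     # active mgr and availability; the retry loop above presumably waits on this kind of check
    ceph mgr module disable restful   # likewise iostat and nfs, as logged below
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus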
2025-06-22 19:59:04.516163 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.516168 | orchestrator | 2025-06-22 19:59:04.516174 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-22 19:59:04.516179 | orchestrator | Sunday 22 June 2025 19:54:10 +0000 (0:00:30.213) 0:06:26.493 *********** 2025-06-22 19:59:04.516184 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.516190 | orchestrator | 2025-06-22 19:59:04.516195 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-22 19:59:04.516200 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:01.519) 0:06:28.012 *********** 2025-06-22 19:59:04.516206 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.516211 | orchestrator | 2025-06-22 19:59:04.516259 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-22 19:59:04.516267 | orchestrator | Sunday 22 June 2025 19:54:12 +0000 (0:00:00.886) 0:06:28.898 *********** 2025-06-22 19:59:04.516273 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.516278 | orchestrator | 2025-06-22 19:59:04.516284 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-22 19:59:04.516289 | orchestrator | Sunday 22 June 2025 19:54:13 +0000 (0:00:00.144) 0:06:29.043 *********** 2025-06-22 19:59:04.516295 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-22 19:59:04.516300 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-22 19:59:04.516305 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-22 19:59:04.516311 | orchestrator | 2025-06-22 19:59:04.516321 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-22 19:59:04.516327 | orchestrator | Sunday 22 June 2025 19:54:19 +0000 (0:00:06.861) 0:06:35.904 *********** 2025-06-22 19:59:04.516332 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-22 19:59:04.516342 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-22 19:59:04.516348 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-22 19:59:04.516353 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-22 19:59:04.516359 | orchestrator | 2025-06-22 19:59:04.516364 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.516370 | orchestrator | Sunday 22 June 2025 19:54:24 +0000 (0:00:04.815) 0:06:40.720 *********** 2025-06-22 19:59:04.516375 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.516381 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.516386 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.516391 | orchestrator | 2025-06-22 19:59:04.516397 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 19:59:04.516402 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:00.785) 0:06:41.506 *********** 2025-06-22 19:59:04.516407 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.516413 | orchestrator | 2025-06-22 
19:59:04.516418 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 19:59:04.516424 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:00.484) 0:06:41.990 *********** 2025-06-22 19:59:04.516429 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.516434 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.516440 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.516445 | orchestrator | 2025-06-22 19:59:04.516451 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 19:59:04.516456 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:00.292) 0:06:42.282 *********** 2025-06-22 19:59:04.516462 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.516467 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.516472 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.516478 | orchestrator | 2025-06-22 19:59:04.516483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 19:59:04.516488 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:01.496) 0:06:43.778 *********** 2025-06-22 19:59:04.516494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:59:04.516499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:59:04.516505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:59:04.516510 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.516515 | orchestrator | 2025-06-22 19:59:04.516521 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 19:59:04.516526 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.513) 0:06:44.292 *********** 2025-06-22 19:59:04.516551 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.516557 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.516563 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.516568 | orchestrator | 2025-06-22 19:59:04.516573 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-22 19:59:04.516579 | orchestrator | 2025-06-22 19:59:04.516584 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.516590 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.527) 0:06:44.819 *********** 2025-06-22 19:59:04.516595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.516601 | orchestrator | 2025-06-22 19:59:04.516607 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.516612 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.585) 0:06:45.405 *********** 2025-06-22 19:59:04.516617 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.516623 | orchestrator | 2025-06-22 19:59:04.516632 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.516638 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.474) 0:06:45.880 *********** 2025-06-22 19:59:04.516643 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516649 | orchestrator | skipping: [testbed-node-4] 
2025-06-22 19:59:04.516654 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516659 | orchestrator | 2025-06-22 19:59:04.516664 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.516669 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.259) 0:06:46.140 *********** 2025-06-22 19:59:04.516673 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.516678 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.516683 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.516688 | orchestrator | 2025-06-22 19:59:04.516693 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.516698 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.816) 0:06:46.956 *********** 2025-06-22 19:59:04.516702 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.516707 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.516712 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.516717 | orchestrator | 2025-06-22 19:59:04.516722 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.516727 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.628) 0:06:47.585 *********** 2025-06-22 19:59:04.516731 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.516736 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.516741 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.516746 | orchestrator | 2025-06-22 19:59:04.516751 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.516756 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:00.725) 0:06:48.310 *********** 2025-06-22 19:59:04.516764 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516768 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.516773 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516778 | orchestrator | 2025-06-22 19:59:04.516783 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.516788 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:00.261) 0:06:48.572 *********** 2025-06-22 19:59:04.516793 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516798 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.516802 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516807 | orchestrator | 2025-06-22 19:59:04.516812 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.516817 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:00.442) 0:06:49.014 *********** 2025-06-22 19:59:04.516822 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516826 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.516831 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516836 | orchestrator | 2025-06-22 19:59:04.516841 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.516846 | orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:00.293) 0:06:49.307 *********** 2025-06-22 19:59:04.516850 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.516855 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.516860 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.516865 | orchestrator | 2025-06-22 
19:59:04.516870 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.516879 | orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:00.630) 0:06:49.938 *********** 2025-06-22 19:59:04.516886 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.516893 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.516900 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.516908 | orchestrator | 2025-06-22 19:59:04.516917 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.516931 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:00.673) 0:06:50.611 *********** 2025-06-22 19:59:04.516938 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516943 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.516948 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516953 | orchestrator | 2025-06-22 19:59:04.516958 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.516962 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:00.739) 0:06:51.351 *********** 2025-06-22 19:59:04.516967 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.516972 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.516977 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.516982 | orchestrator | 2025-06-22 19:59:04.516987 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.516992 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:00.360) 0:06:51.712 *********** 2025-06-22 19:59:04.516996 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517001 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517006 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517011 | orchestrator | 2025-06-22 19:59:04.517016 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.517023 | orchestrator | Sunday 22 June 2025 19:54:36 +0000 (0:00:00.452) 0:06:52.164 *********** 2025-06-22 19:59:04.517028 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517033 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517038 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517042 | orchestrator | 2025-06-22 19:59:04.517047 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.517052 | orchestrator | Sunday 22 June 2025 19:54:36 +0000 (0:00:00.391) 0:06:52.556 *********** 2025-06-22 19:59:04.517057 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517062 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517066 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517071 | orchestrator | 2025-06-22 19:59:04.517076 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.517081 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:00.743) 0:06:53.300 *********** 2025-06-22 19:59:04.517086 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517091 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517096 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517100 | orchestrator | 2025-06-22 19:59:04.517105 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 
19:59:04.517110 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:00.395) 0:06:53.696 *********** 2025-06-22 19:59:04.517115 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517120 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517124 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517129 | orchestrator | 2025-06-22 19:59:04.517134 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.517139 | orchestrator | Sunday 22 June 2025 19:54:38 +0000 (0:00:00.355) 0:06:54.051 *********** 2025-06-22 19:59:04.517144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517148 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517153 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517158 | orchestrator | 2025-06-22 19:59:04.517162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.517167 | orchestrator | Sunday 22 June 2025 19:54:38 +0000 (0:00:00.345) 0:06:54.396 *********** 2025-06-22 19:59:04.517172 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517177 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517182 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517186 | orchestrator | 2025-06-22 19:59:04.517191 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.517196 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:00.748) 0:06:55.145 *********** 2025-06-22 19:59:04.517204 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517209 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517214 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517271 | orchestrator | 2025-06-22 19:59:04.517276 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-22 19:59:04.517281 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:00.651) 0:06:55.796 *********** 2025-06-22 19:59:04.517286 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517291 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517298 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517303 | orchestrator | 2025-06-22 19:59:04.517308 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-22 19:59:04.517313 | orchestrator | Sunday 22 June 2025 19:54:40 +0000 (0:00:00.331) 0:06:56.128 *********** 2025-06-22 19:59:04.517318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 19:59:04.517323 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 19:59:04.517328 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 19:59:04.517332 | orchestrator | 2025-06-22 19:59:04.517337 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-22 19:59:04.517342 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:00.967) 0:06:57.095 *********** 2025-06-22 19:59:04.517347 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.517352 | orchestrator | 2025-06-22 19:59:04.517357 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-22 19:59:04.517362 | 
orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:00.968) 0:06:58.064 *********** 2025-06-22 19:59:04.517366 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517371 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517376 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517381 | orchestrator | 2025-06-22 19:59:04.517385 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-22 19:59:04.517390 | orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:00.321) 0:06:58.386 *********** 2025-06-22 19:59:04.517395 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517400 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517404 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517409 | orchestrator | 2025-06-22 19:59:04.517414 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-22 19:59:04.517419 | orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:00.323) 0:06:58.710 *********** 2025-06-22 19:59:04.517424 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517429 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517433 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517438 | orchestrator | 2025-06-22 19:59:04.517443 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-22 19:59:04.517448 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:00.884) 0:06:59.594 *********** 2025-06-22 19:59:04.517453 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517457 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517462 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517467 | orchestrator | 2025-06-22 19:59:04.517471 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-22 19:59:04.517476 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:00.296) 0:06:59.890 *********** 2025-06-22 19:59:04.517481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 19:59:04.517492 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 19:59:04.517497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 19:59:04.517502 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 19:59:04.517511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 19:59:04.517516 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 19:59:04.517520 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 19:59:04.517525 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 19:59:04.517530 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 19:59:04.517535 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 19:59:04.517539 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 19:59:04.517544 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 19:59:04.517549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 19:59:04.517554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 19:59:04.517558 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 19:59:04.517563 | orchestrator | 2025-06-22 19:59:04.517568 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-22 19:59:04.517573 | orchestrator | Sunday 22 June 2025 19:54:46 +0000 (0:00:02.850) 0:07:02.741 *********** 2025-06-22 19:59:04.517578 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517582 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517587 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517592 | orchestrator | 2025-06-22 19:59:04.517597 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-22 19:59:04.517602 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.298) 0:07:03.039 *********** 2025-06-22 19:59:04.517606 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.517611 | orchestrator | 2025-06-22 19:59:04.517616 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-22 19:59:04.517621 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.790) 0:07:03.829 *********** 2025-06-22 19:59:04.517628 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 19:59:04.517633 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 19:59:04.517638 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 19:59:04.517643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-22 19:59:04.517648 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-22 19:59:04.517652 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-22 19:59:04.517657 | orchestrator | 2025-06-22 19:59:04.517662 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-22 19:59:04.517667 | orchestrator | Sunday 22 June 2025 19:54:48 +0000 (0:00:00.922) 0:07:04.752 *********** 2025-06-22 19:59:04.517672 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.517677 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.517681 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.517686 | orchestrator | 2025-06-22 19:59:04.517691 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-22 19:59:04.517696 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:01.977) 0:07:06.729 *********** 2025-06-22 19:59:04.517701 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:59:04.517705 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.517710 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.517715 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:59:04.517723 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 19:59:04.517728 | 
orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.517733 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:59:04.517738 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 19:59:04.517743 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.517747 | orchestrator | 2025-06-22 19:59:04.517752 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-22 19:59:04.517757 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:01.300) 0:07:08.029 *********** 2025-06-22 19:59:04.517762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.517766 | orchestrator | 2025-06-22 19:59:04.517771 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-22 19:59:04.517776 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:02.064) 0:07:10.094 *********** 2025-06-22 19:59:04.517781 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.517786 | orchestrator | 2025-06-22 19:59:04.517790 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-22 19:59:04.517795 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.492) 0:07:10.587 *********** 2025-06-22 19:59:04.517803 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b7d3102c-a914-5a7b-b709-ad20b0d5984a', 'data_vg': 'ceph-b7d3102c-a914-5a7b-b709-ad20b0d5984a'}) 2025-06-22 19:59:04.517809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9f4df137-04dd-5f0e-acd7-f62ec38375b4', 'data_vg': 'ceph-9f4df137-04dd-5f0e-acd7-f62ec38375b4'}) 2025-06-22 19:59:04.517814 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-26b627d5-c9a2-5c9e-a2df-a450422a30c2', 'data_vg': 'ceph-26b627d5-c9a2-5c9e-a2df-a450422a30c2'}) 2025-06-22 19:59:04.517819 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f64325fb-298e-5c24-b96e-fd5d866c56eb', 'data_vg': 'ceph-f64325fb-298e-5c24-b96e-fd5d866c56eb'}) 2025-06-22 19:59:04.517824 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0c557b89-2e3b-5795-aff3-9e4ccad52f24', 'data_vg': 'ceph-0c557b89-2e3b-5795-aff3-9e4ccad52f24'}) 2025-06-22 19:59:04.517829 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5c0aa592-9340-5775-8ceb-7aef1759a79b', 'data_vg': 'ceph-5c0aa592-9340-5775-8ceb-7aef1759a79b'}) 2025-06-22 19:59:04.517834 | orchestrator | 2025-06-22 19:59:04.517839 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-22 19:59:04.517844 | orchestrator | Sunday 22 June 2025 19:55:41 +0000 (0:00:47.400) 0:07:57.987 *********** 2025-06-22 19:59:04.517848 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.517853 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.517858 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.517863 | orchestrator | 2025-06-22 19:59:04.517867 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-22 19:59:04.517872 | orchestrator | Sunday 22 June 2025 19:55:42 +0000 (0:00:00.586) 0:07:58.573 *********** 2025-06-22 19:59:04.517877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.517882 | orchestrator | 
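Note: the OSD preparation steps above reduce to a handful of host-level commands. A sketch only, not this job's literal invocations (the role drives ceph-volume through its own module inside the ceph container, and <vg>/<lv> stand for the volume group and logical volume names listed in the items above):

    sysctl -w fs.aio-max-nr=1048576          # plus fs.file-max, vm.zone_reclaim_mode, vm.swappiness, vm.min_free_kbytes as logged
    ceph osd set noup                        # keep new OSDs from being marked up mid-provisioning
    ceph-volume lvm create --bluestore --data <vg>/<lv>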
2025-06-22 19:59:04.517887 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-22 19:59:04.517892 | orchestrator | Sunday 22 June 2025 19:55:43 +0000 (0:00:00.672) 0:07:59.245 *********** 2025-06-22 19:59:04.517897 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517901 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517906 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517911 | orchestrator | 2025-06-22 19:59:04.517916 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-22 19:59:04.517920 | orchestrator | Sunday 22 June 2025 19:55:43 +0000 (0:00:00.607) 0:07:59.852 *********** 2025-06-22 19:59:04.517931 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.517936 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.517940 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.517945 | orchestrator | 2025-06-22 19:59:04.517953 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-22 19:59:04.517958 | orchestrator | Sunday 22 June 2025 19:55:47 +0000 (0:00:03.384) 0:08:03.237 *********** 2025-06-22 19:59:04.517963 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.517967 | orchestrator | 2025-06-22 19:59:04.517972 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-22 19:59:04.517977 | orchestrator | Sunday 22 June 2025 19:55:47 +0000 (0:00:00.579) 0:08:03.816 *********** 2025-06-22 19:59:04.517982 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.517987 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.517991 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.517996 | orchestrator | 2025-06-22 19:59:04.518001 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-22 19:59:04.518006 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:01.164) 0:08:04.981 *********** 2025-06-22 19:59:04.518010 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.518037 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.518042 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.518047 | orchestrator | 2025-06-22 19:59:04.518052 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-22 19:59:04.518057 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:01.294) 0:08:06.276 *********** 2025-06-22 19:59:04.518061 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.518066 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.518071 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.518076 | orchestrator | 2025-06-22 19:59:04.518081 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-22 19:59:04.518085 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:01.717) 0:08:07.993 *********** 2025-06-22 19:59:04.518090 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518095 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518100 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518104 | orchestrator | 2025-06-22 19:59:04.518109 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-22 19:59:04.518114 | orchestrator | 
Sunday 22 June 2025 19:55:52 +0000 (0:00:00.356) 0:08:08.350 *********** 2025-06-22 19:59:04.518119 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518124 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518128 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518133 | orchestrator | 2025-06-22 19:59:04.518138 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-22 19:59:04.518143 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:00.375) 0:08:08.725 *********** 2025-06-22 19:59:04.518148 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 19:59:04.518152 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-22 19:59:04.518157 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-22 19:59:04.518162 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-22 19:59:04.518167 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-22 19:59:04.518172 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-22 19:59:04.518176 | orchestrator | 2025-06-22 19:59:04.518184 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-22 19:59:04.518190 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:01.415) 0:08:10.140 *********** 2025-06-22 19:59:04.518194 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 19:59:04.518199 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 19:59:04.518204 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 19:59:04.518209 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 19:59:04.518232 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 19:59:04.518240 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-22 19:59:04.518245 | orchestrator | 2025-06-22 19:59:04.518250 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-22 19:59:04.518255 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:02.065) 0:08:12.206 *********** 2025-06-22 19:59:04.518259 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 19:59:04.518264 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 19:59:04.518269 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 19:59:04.518274 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 19:59:04.518278 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 19:59:04.518283 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-22 19:59:04.518288 | orchestrator | 2025-06-22 19:59:04.518293 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-22 19:59:04.518297 | orchestrator | Sunday 22 June 2025 19:55:59 +0000 (0:00:03.292) 0:08:15.499 *********** 2025-06-22 19:59:04.518302 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518307 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518312 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.518316 | orchestrator | 2025-06-22 19:59:04.518321 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-22 19:59:04.518326 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:03.183) 0:08:18.682 *********** 2025-06-22 19:59:04.518331 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518335 | orchestrator | skipping: [testbed-node-4] 2025-06-22 
19:59:04.518340 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-22 19:59:04.518345 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.518350 | orchestrator | 2025-06-22 19:59:04.518355 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-22 19:59:04.518359 | orchestrator | Sunday 22 June 2025 19:56:15 +0000 (0:00:13.275) 0:08:31.957 *********** 2025-06-22 19:59:04.518364 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518369 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518373 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518378 | orchestrator | 2025-06-22 19:59:04.518383 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.518388 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:00.845) 0:08:32.803 *********** 2025-06-22 19:59:04.518393 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518398 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518403 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518407 | orchestrator | 2025-06-22 19:59:04.518413 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 19:59:04.518417 | orchestrator | Sunday 22 June 2025 19:56:17 +0000 (0:00:00.835) 0:08:33.639 *********** 2025-06-22 19:59:04.518422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.518427 | orchestrator | 2025-06-22 19:59:04.518432 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 19:59:04.518437 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.588) 0:08:34.227 *********** 2025-06-22 19:59:04.518441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.518446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.518451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.518456 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518460 | orchestrator | 2025-06-22 19:59:04.518465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 19:59:04.518470 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.426) 0:08:34.653 *********** 2025-06-22 19:59:04.518479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518483 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518488 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518493 | orchestrator | 2025-06-22 19:59:04.518552 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 19:59:04.518572 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.321) 0:08:34.975 *********** 2025-06-22 19:59:04.518577 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518582 | orchestrator | 2025-06-22 19:59:04.518587 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 19:59:04.518591 | orchestrator | Sunday 22 June 2025 19:56:19 +0000 (0:00:00.240) 0:08:35.216 *********** 2025-06-22 19:59:04.518596 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:59:04.518601 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518606 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518610 | orchestrator | 2025-06-22 19:59:04.518615 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 19:59:04.518620 | orchestrator | Sunday 22 June 2025 19:56:19 +0000 (0:00:00.692) 0:08:35.909 *********** 2025-06-22 19:59:04.518625 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518629 | orchestrator | 2025-06-22 19:59:04.518634 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 19:59:04.518639 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.238) 0:08:36.147 *********** 2025-06-22 19:59:04.518644 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518648 | orchestrator | 2025-06-22 19:59:04.518653 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 19:59:04.518663 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.234) 0:08:36.381 *********** 2025-06-22 19:59:04.518668 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518673 | orchestrator | 2025-06-22 19:59:04.518677 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 19:59:04.518682 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.135) 0:08:36.517 *********** 2025-06-22 19:59:04.518687 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518691 | orchestrator | 2025-06-22 19:59:04.518696 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 19:59:04.518701 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.227) 0:08:36.745 *********** 2025-06-22 19:59:04.518706 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518711 | orchestrator | 2025-06-22 19:59:04.518715 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 19:59:04.518720 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.250) 0:08:36.996 *********** 2025-06-22 19:59:04.518725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.518729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.518734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.518739 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518744 | orchestrator | 2025-06-22 19:59:04.518748 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 19:59:04.518753 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:00.401) 0:08:37.397 *********** 2025-06-22 19:59:04.518758 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518763 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518767 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518772 | orchestrator | 2025-06-22 19:59:04.518777 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 19:59:04.518782 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:00.341) 0:08:37.738 *********** 2025-06-22 19:59:04.518787 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518792 | orchestrator | 2025-06-22 19:59:04.518796 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable 
balancer] **************************** 2025-06-22 19:59:04.518805 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:00.932) 0:08:38.670 *********** 2025-06-22 19:59:04.518810 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518815 | orchestrator | 2025-06-22 19:59:04.518819 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-22 19:59:04.518824 | orchestrator | 2025-06-22 19:59:04.518829 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.518834 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.658) 0:08:39.328 *********** 2025-06-22 19:59:04.518839 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.518844 | orchestrator | 2025-06-22 19:59:04.518853 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.518857 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:01.254) 0:08:40.583 *********** 2025-06-22 19:59:04.518862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.518867 | orchestrator | 2025-06-22 19:59:04.518872 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.518877 | orchestrator | Sunday 22 June 2025 19:56:26 +0000 (0:00:01.455) 0:08:42.038 *********** 2025-06-22 19:59:04.518881 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.518886 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.518891 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.518895 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.518900 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.518905 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.518910 | orchestrator | 2025-06-22 19:59:04.518915 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.518919 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:01.435) 0:08:43.475 *********** 2025-06-22 19:59:04.518924 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.518929 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.518934 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.518938 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.518943 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.518948 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.518953 | orchestrator | 2025-06-22 19:59:04.518957 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.518962 | orchestrator | Sunday 22 June 2025 19:56:28 +0000 (0:00:00.724) 0:08:44.200 *********** 2025-06-22 19:59:04.518967 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.518972 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.518977 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.518981 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.518986 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.518991 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.518995 | orchestrator | 2025-06-22 19:59:04.519000 | 
orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.519005 | orchestrator | Sunday 22 June 2025 19:56:29 +0000 (0:00:01.111) 0:08:45.311 *********** 2025-06-22 19:59:04.519010 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519014 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519019 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519024 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519029 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519033 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519038 | orchestrator | 2025-06-22 19:59:04.519043 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.519048 | orchestrator | Sunday 22 June 2025 19:56:30 +0000 (0:00:00.756) 0:08:46.068 *********** 2025-06-22 19:59:04.519053 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519057 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519069 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519074 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519078 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519083 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519088 | orchestrator | 2025-06-22 19:59:04.519093 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.519097 | orchestrator | Sunday 22 June 2025 19:56:31 +0000 (0:00:01.448) 0:08:47.517 *********** 2025-06-22 19:59:04.519102 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519107 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519112 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519116 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519121 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519126 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519131 | orchestrator | 2025-06-22 19:59:04.519135 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.519140 | orchestrator | Sunday 22 June 2025 19:56:32 +0000 (0:00:00.595) 0:08:48.112 *********** 2025-06-22 19:59:04.519145 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519150 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519154 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519159 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519164 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519169 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519173 | orchestrator | 2025-06-22 19:59:04.519178 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.519183 | orchestrator | Sunday 22 June 2025 19:56:32 +0000 (0:00:00.802) 0:08:48.915 *********** 2025-06-22 19:59:04.519188 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519193 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519197 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519202 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519207 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519212 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519228 | orchestrator | 2025-06-22 19:59:04.519234 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter 
container] ********************** 2025-06-22 19:59:04.519239 | orchestrator | Sunday 22 June 2025 19:56:34 +0000 (0:00:01.144) 0:08:50.059 *********** 2025-06-22 19:59:04.519244 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519248 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519253 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519258 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519263 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519267 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519272 | orchestrator | 2025-06-22 19:59:04.519277 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.519282 | orchestrator | Sunday 22 June 2025 19:56:35 +0000 (0:00:01.672) 0:08:51.731 *********** 2025-06-22 19:59:04.519287 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519292 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519296 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519301 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519306 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519311 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519316 | orchestrator | 2025-06-22 19:59:04.519324 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.519329 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.720) 0:08:52.451 *********** 2025-06-22 19:59:04.519333 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519338 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519343 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519348 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519353 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519357 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519366 | orchestrator | 2025-06-22 19:59:04.519371 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.519376 | orchestrator | Sunday 22 June 2025 19:56:37 +0000 (0:00:01.021) 0:08:53.473 *********** 2025-06-22 19:59:04.519381 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519385 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519390 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519395 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519400 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519405 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519409 | orchestrator | 2025-06-22 19:59:04.519414 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.519419 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:00.627) 0:08:54.101 *********** 2025-06-22 19:59:04.519424 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519429 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519434 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519438 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519445 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519452 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519460 | orchestrator | 2025-06-22 19:59:04.519467 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.519474 | orchestrator | Sunday 
22 June 2025 19:56:38 +0000 (0:00:00.841) 0:08:54.942 *********** 2025-06-22 19:59:04.519482 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519489 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519496 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519510 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519515 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519519 | orchestrator | 2025-06-22 19:59:04.519524 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.519529 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.717) 0:08:55.660 *********** 2025-06-22 19:59:04.519534 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519538 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519543 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519548 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519552 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519557 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519562 | orchestrator | 2025-06-22 19:59:04.519566 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.519571 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:01.196) 0:08:56.856 *********** 2025-06-22 19:59:04.519576 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519584 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519590 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519594 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:04.519599 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:04.519604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:04.519608 | orchestrator | 2025-06-22 19:59:04.519613 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.519618 | orchestrator | Sunday 22 June 2025 19:56:41 +0000 (0:00:00.735) 0:08:57.592 *********** 2025-06-22 19:59:04.519623 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.519628 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.519632 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.519637 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519642 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519647 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519652 | orchestrator | 2025-06-22 19:59:04.519656 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.519661 | orchestrator | Sunday 22 June 2025 19:56:42 +0000 (0:00:01.086) 0:08:58.678 *********** 2025-06-22 19:59:04.519670 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519675 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519680 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519685 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519689 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519694 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519699 | orchestrator | 2025-06-22 19:59:04.519704 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.519709 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:00.741) 0:08:59.419 *********** 
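Annotation: the OSD activation steps earlier in this play ("Generate systemd unit file", "Enable ceph-osd.target", "Systemd start osd", "Unset noup flag", "Wait for all osd to be up") correspond roughly to the commands below. This is only a sketch; OSD id 0 and the expected OSD count of 6 are taken from this testbed's log output, not from the role itself.

  # Per OSD id reported by ceph-volume (id 0 shown); the templated unit wraps the container
  systemctl enable --now ceph-osd@0
  systemctl enable ceph-osd.target

  # Allow the new OSDs to be marked up again, then poll until they all report up
  ceph osd unset noup
  until ceph osd stat | grep -q '6 up'; do sleep 5; done   # 6 OSDs expected in this testbed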
2025-06-22 19:59:04.519713 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.519718 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.519723 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.519727 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519732 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.519737 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.519742 | orchestrator | 2025-06-22 19:59:04.519746 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-22 19:59:04.519751 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:01.614) 0:09:01.034 *********** 2025-06-22 19:59:04.519756 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.519761 | orchestrator | 2025-06-22 19:59:04.519766 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-22 19:59:04.519770 | orchestrator | Sunday 22 June 2025 19:56:48 +0000 (0:00:03.866) 0:09:04.901 *********** 2025-06-22 19:59:04.519775 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.519780 | orchestrator | 2025-06-22 19:59:04.519785 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-22 19:59:04.519790 | orchestrator | Sunday 22 June 2025 19:56:50 +0000 (0:00:02.026) 0:09:06.928 *********** 2025-06-22 19:59:04.519794 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.519799 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.519804 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.519812 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.519817 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.519821 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.519826 | orchestrator | 2025-06-22 19:59:04.519831 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-22 19:59:04.519836 | orchestrator | Sunday 22 June 2025 19:56:52 +0000 (0:00:02.046) 0:09:08.975 *********** 2025-06-22 19:59:04.519840 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.519845 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.519850 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.519855 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.519859 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.519864 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.519869 | orchestrator | 2025-06-22 19:59:04.519874 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-22 19:59:04.519879 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:01.211) 0:09:10.186 *********** 2025-06-22 19:59:04.519883 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.519889 | orchestrator | 2025-06-22 19:59:04.519894 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-22 19:59:04.519899 | orchestrator | Sunday 22 June 2025 19:56:55 +0000 (0:00:01.328) 0:09:11.514 *********** 2025-06-22 19:59:04.519903 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.519908 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.519913 | orchestrator | changed: [testbed-node-5] 
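Annotation: the "Create client.crash keyring" and "Create /var/lib/ceph/crash/posted" tasks above are equivalent to something like the following. The mon/mgr capability profile is the standard one for ceph-crash, but the keyring output path is an assumed example rather than the exact path used by the role.

  # Create (or fetch) the crash client key on a monitor node and write it out;
  # /etc/ceph/ceph.client.crash.keyring is an assumed destination path
  ceph auth get-or-create client.crash \
      mon 'allow profile crash' mgr 'allow profile crash' \
      -o /etc/ceph/ceph.client.crash.keyring

  # Directory where ceph-crash moves crash reports it has already posted
  mkdir -p /var/lib/ceph/crash/posted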
2025-06-22 19:59:04.519918 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.519922 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.519927 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.519932 | orchestrator | 2025-06-22 19:59:04.519941 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-22 19:59:04.519946 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:01.788) 0:09:13.302 *********** 2025-06-22 19:59:04.519950 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.519955 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.519960 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.519965 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.519969 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.519974 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.519979 | orchestrator | 2025-06-22 19:59:04.519984 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-22 19:59:04.519988 | orchestrator | Sunday 22 June 2025 19:57:00 +0000 (0:00:02.873) 0:09:16.176 *********** 2025-06-22 19:59:04.519994 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:04.519998 | orchestrator | 2025-06-22 19:59:04.520003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-22 19:59:04.520008 | orchestrator | Sunday 22 June 2025 19:57:01 +0000 (0:00:01.406) 0:09:17.583 *********** 2025-06-22 19:59:04.520013 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520020 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520025 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520030 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.520035 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.520040 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.520044 | orchestrator | 2025-06-22 19:59:04.520049 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-22 19:59:04.520054 | orchestrator | Sunday 22 June 2025 19:57:02 +0000 (0:00:01.257) 0:09:18.840 *********** 2025-06-22 19:59:04.520059 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.520064 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.520069 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.520073 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:04.520078 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:04.520083 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:04.520088 | orchestrator | 2025-06-22 19:59:04.520092 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-22 19:59:04.520097 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:02.941) 0:09:21.782 *********** 2025-06-22 19:59:04.520102 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520107 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520112 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520116 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:04.520121 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:04.520126 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:04.520130 | orchestrator | 2025-06-22 19:59:04.520135 | orchestrator | 
PLAY [Apply role ceph-mds] ***************************************************** 2025-06-22 19:59:04.520140 | orchestrator | 2025-06-22 19:59:04.520145 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.520150 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:01.225) 0:09:23.007 *********** 2025-06-22 19:59:04.520154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.520159 | orchestrator | 2025-06-22 19:59:04.520164 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.520169 | orchestrator | Sunday 22 June 2025 19:57:07 +0000 (0:00:00.511) 0:09:23.518 *********** 2025-06-22 19:59:04.520174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.520179 | orchestrator | 2025-06-22 19:59:04.520183 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.520188 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.709) 0:09:24.228 *********** 2025-06-22 19:59:04.520196 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520201 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520206 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520211 | orchestrator | 2025-06-22 19:59:04.520216 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.520260 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.298) 0:09:24.526 *********** 2025-06-22 19:59:04.520265 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520270 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520275 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520280 | orchestrator | 2025-06-22 19:59:04.520284 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.520289 | orchestrator | Sunday 22 June 2025 19:57:09 +0000 (0:00:00.760) 0:09:25.287 *********** 2025-06-22 19:59:04.520294 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520299 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520303 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520308 | orchestrator | 2025-06-22 19:59:04.520313 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.520318 | orchestrator | Sunday 22 June 2025 19:57:10 +0000 (0:00:01.224) 0:09:26.512 *********** 2025-06-22 19:59:04.520322 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520327 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520332 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520337 | orchestrator | 2025-06-22 19:59:04.520341 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.520346 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:00.670) 0:09:27.182 *********** 2025-06-22 19:59:04.520351 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520356 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520361 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520365 | orchestrator | 2025-06-22 19:59:04.520370 | orchestrator | TASK [ceph-handler : Check for a rbd 
mirror container] ************************* 2025-06-22 19:59:04.520375 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:00.297) 0:09:27.480 *********** 2025-06-22 19:59:04.520380 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520384 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520394 | orchestrator | 2025-06-22 19:59:04.520399 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.520403 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:00.262) 0:09:27.743 *********** 2025-06-22 19:59:04.520408 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520413 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520418 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520422 | orchestrator | 2025-06-22 19:59:04.520427 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.520432 | orchestrator | Sunday 22 June 2025 19:57:12 +0000 (0:00:00.418) 0:09:28.161 *********** 2025-06-22 19:59:04.520437 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520441 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520446 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520451 | orchestrator | 2025-06-22 19:59:04.520456 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.520461 | orchestrator | Sunday 22 June 2025 19:57:12 +0000 (0:00:00.655) 0:09:28.816 *********** 2025-06-22 19:59:04.520465 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520470 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520475 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520480 | orchestrator | 2025-06-22 19:59:04.520485 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.520493 | orchestrator | Sunday 22 June 2025 19:57:13 +0000 (0:00:00.774) 0:09:29.591 *********** 2025-06-22 19:59:04.520498 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520508 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520513 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520518 | orchestrator | 2025-06-22 19:59:04.520523 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.520527 | orchestrator | Sunday 22 June 2025 19:57:13 +0000 (0:00:00.305) 0:09:29.896 *********** 2025-06-22 19:59:04.520532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520537 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520542 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520547 | orchestrator | 2025-06-22 19:59:04.520551 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.520556 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:00.618) 0:09:30.515 *********** 2025-06-22 19:59:04.520561 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520566 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520571 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520575 | orchestrator | 2025-06-22 19:59:04.520580 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.520585 | orchestrator | Sunday 22 June 2025 
19:57:14 +0000 (0:00:00.337) 0:09:30.852 *********** 2025-06-22 19:59:04.520590 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520594 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520599 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520604 | orchestrator | 2025-06-22 19:59:04.520609 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.520651 | orchestrator | Sunday 22 June 2025 19:57:15 +0000 (0:00:00.366) 0:09:31.219 *********** 2025-06-22 19:59:04.520656 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520660 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520665 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520669 | orchestrator | 2025-06-22 19:59:04.520674 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.520678 | orchestrator | Sunday 22 June 2025 19:57:15 +0000 (0:00:00.337) 0:09:31.556 *********** 2025-06-22 19:59:04.520683 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520688 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520692 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520697 | orchestrator | 2025-06-22 19:59:04.520701 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.520706 | orchestrator | Sunday 22 June 2025 19:57:16 +0000 (0:00:00.639) 0:09:32.196 *********** 2025-06-22 19:59:04.520710 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520715 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520719 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520724 | orchestrator | 2025-06-22 19:59:04.520728 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.520733 | orchestrator | Sunday 22 June 2025 19:57:16 +0000 (0:00:00.324) 0:09:32.520 *********** 2025-06-22 19:59:04.520738 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520745 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.520750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520754 | orchestrator | 2025-06-22 19:59:04.520759 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.520763 | orchestrator | Sunday 22 June 2025 19:57:16 +0000 (0:00:00.304) 0:09:32.825 *********** 2025-06-22 19:59:04.520768 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520772 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520777 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520781 | orchestrator | 2025-06-22 19:59:04.520786 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.520791 | orchestrator | Sunday 22 June 2025 19:57:17 +0000 (0:00:00.311) 0:09:33.137 *********** 2025-06-22 19:59:04.520795 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.520800 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.520804 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.520813 | orchestrator | 2025-06-22 19:59:04.520818 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-22 19:59:04.520823 | orchestrator | Sunday 22 June 2025 19:57:17 +0000 (0:00:00.866) 0:09:34.003 *********** 2025-06-22 19:59:04.520827 | orchestrator | skipping: [testbed-node-4] 
2025-06-22 19:59:04.520832 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.520836 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-22 19:59:04.520841 | orchestrator | 2025-06-22 19:59:04.520845 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-22 19:59:04.520850 | orchestrator | Sunday 22 June 2025 19:57:18 +0000 (0:00:00.381) 0:09:34.384 *********** 2025-06-22 19:59:04.520854 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.520859 | orchestrator | 2025-06-22 19:59:04.520864 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-22 19:59:04.520868 | orchestrator | Sunday 22 June 2025 19:57:20 +0000 (0:00:02.180) 0:09:36.565 *********** 2025-06-22 19:59:04.520874 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-22 19:59:04.520881 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.520885 | orchestrator | 2025-06-22 19:59:04.520890 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-22 19:59:04.520894 | orchestrator | Sunday 22 June 2025 19:57:20 +0000 (0:00:00.239) 0:09:36.804 *********** 2025-06-22 19:59:04.520900 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 19:59:04.520913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 19:59:04.520918 | orchestrator | 2025-06-22 19:59:04.520923 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-22 19:59:04.520927 | orchestrator | Sunday 22 June 2025 19:57:28 +0000 (0:00:07.888) 0:09:44.692 *********** 2025-06-22 19:59:04.520932 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 19:59:04.520936 | orchestrator | 2025-06-22 19:59:04.520941 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-22 19:59:04.520946 | orchestrator | Sunday 22 June 2025 19:57:32 +0000 (0:00:03.360) 0:09:48.052 *********** 2025-06-22 19:59:04.520950 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.520955 | orchestrator | 2025-06-22 19:59:04.520959 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-22 19:59:04.520964 | orchestrator | Sunday 22 June 2025 19:57:32 +0000 (0:00:00.607) 0:09:48.660 *********** 2025-06-22 19:59:04.520968 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 19:59:04.520973 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 19:59:04.520977 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 
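Annotation: the "Create filesystem pools" and "Create ceph filesystem" tasks above map to standard Ceph commands. The sketch below assumes the pool parameters shown in the loop items (pg_num/pgp_num 16, replicated_rule, size 3) and a filesystem named "cephfs"; it is not the role's literal implementation.

  # Data and metadata pools with the parameters from the loop items
  ceph osd pool create cephfs_data 16 16 replicated replicated_rule
  ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
  ceph osd pool set cephfs_data size 3
  ceph osd pool set cephfs_metadata size 3

  # Tie them together as a CephFS filesystem (name "cephfs" assumed)
  ceph fs new cephfs cephfs_metadata cephfs_data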
2025-06-22 19:59:04.520982 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-22 19:59:04.520986 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-22 19:59:04.520991 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-22 19:59:04.520996 | orchestrator | 2025-06-22 19:59:04.521000 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-22 19:59:04.521012 | orchestrator | Sunday 22 June 2025 19:57:33 +0000 (0:00:01.065) 0:09:49.725 *********** 2025-06-22 19:59:04.521016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.521024 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.521032 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.521040 | orchestrator | 2025-06-22 19:59:04.521046 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-22 19:59:04.521051 | orchestrator | Sunday 22 June 2025 19:57:36 +0000 (0:00:02.443) 0:09:52.169 *********** 2025-06-22 19:59:04.521055 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:59:04.521060 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.521067 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521072 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:59:04.521076 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 19:59:04.521081 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521085 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:59:04.521090 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 19:59:04.521095 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521099 | orchestrator | 2025-06-22 19:59:04.521104 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-22 19:59:04.521108 | orchestrator | Sunday 22 June 2025 19:57:37 +0000 (0:00:01.467) 0:09:53.637 *********** 2025-06-22 19:59:04.521113 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521117 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521122 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521126 | orchestrator | 2025-06-22 19:59:04.521131 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-22 19:59:04.521136 | orchestrator | Sunday 22 June 2025 19:57:40 +0000 (0:00:02.480) 0:09:56.117 *********** 2025-06-22 19:59:04.521140 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521145 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521149 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521154 | orchestrator | 2025-06-22 19:59:04.521158 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-22 19:59:04.521163 | orchestrator | Sunday 22 June 2025 19:57:40 +0000 (0:00:00.332) 0:09:56.450 *********** 2025-06-22 19:59:04.521167 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.521172 | orchestrator | 2025-06-22 19:59:04.521177 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] 
************************************ 2025-06-22 19:59:04.521181 | orchestrator | Sunday 22 June 2025 19:57:41 +0000 (0:00:00.948) 0:09:57.398 *********** 2025-06-22 19:59:04.521186 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.521190 | orchestrator | 2025-06-22 19:59:04.521194 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-22 19:59:04.521199 | orchestrator | Sunday 22 June 2025 19:57:41 +0000 (0:00:00.547) 0:09:57.946 *********** 2025-06-22 19:59:04.521203 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521208 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521213 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521231 | orchestrator | 2025-06-22 19:59:04.521236 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-22 19:59:04.521240 | orchestrator | Sunday 22 June 2025 19:57:43 +0000 (0:00:01.218) 0:09:59.165 *********** 2025-06-22 19:59:04.521245 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521249 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521254 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521258 | orchestrator | 2025-06-22 19:59:04.521263 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-22 19:59:04.521270 | orchestrator | Sunday 22 June 2025 19:57:44 +0000 (0:00:01.469) 0:10:00.635 *********** 2025-06-22 19:59:04.521279 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521284 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521288 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521293 | orchestrator | 2025-06-22 19:59:04.521297 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-22 19:59:04.521302 | orchestrator | Sunday 22 June 2025 19:57:46 +0000 (0:00:01.720) 0:10:02.356 *********** 2025-06-22 19:59:04.521306 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521311 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521315 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521320 | orchestrator | 2025-06-22 19:59:04.521324 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-22 19:59:04.521329 | orchestrator | Sunday 22 June 2025 19:57:48 +0000 (0:00:01.893) 0:10:04.249 *********** 2025-06-22 19:59:04.521333 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521338 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521343 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521347 | orchestrator | 2025-06-22 19:59:04.521352 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.521356 | orchestrator | Sunday 22 June 2025 19:57:49 +0000 (0:00:01.429) 0:10:05.679 *********** 2025-06-22 19:59:04.521361 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521365 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521370 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521374 | orchestrator | 2025-06-22 19:59:04.521379 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 19:59:04.521383 | orchestrator | Sunday 22 June 2025 19:57:50 +0000 (0:00:00.664) 0:10:06.343 *********** 
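Annotation: the per-node "Create mds keyring", "Systemd start mds container" and "Wait for mds socket to exist" tasks in this play amount to roughly the commands below. The capability string, keyring filename and admin socket path are typical values chosen for illustration, not copied from the role.

  # Keyring for the MDS daemon on this node (testbed-node-3 shown); caps are illustrative
  ceph auth get-or-create mds.testbed-node-3 \
      mon 'allow profile mds' osd 'allow rw tag cephfs *=*' mds 'allow' \
      -o /var/lib/ceph/mds/ceph-testbed-node-3/keyring

  # Bring the containerized MDS up via its templated unit and wait for its admin socket
  systemctl enable --now ceph-mds@testbed-node-3
  until test -S /var/run/ceph/ceph-mds.testbed-node-3.asok; do sleep 2; done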
2025-06-22 19:59:04.521388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.521393 | orchestrator | 2025-06-22 19:59:04.521397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 19:59:04.521402 | orchestrator | Sunday 22 June 2025 19:57:51 +0000 (0:00:00.959) 0:10:07.302 *********** 2025-06-22 19:59:04.521406 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521411 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521415 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521420 | orchestrator | 2025-06-22 19:59:04.521425 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 19:59:04.521429 | orchestrator | Sunday 22 June 2025 19:57:51 +0000 (0:00:00.372) 0:10:07.675 *********** 2025-06-22 19:59:04.521434 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.521438 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.521443 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.521447 | orchestrator | 2025-06-22 19:59:04.521452 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 19:59:04.521456 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:01.135) 0:10:08.810 *********** 2025-06-22 19:59:04.521461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.521468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.521473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.521477 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521482 | orchestrator | 2025-06-22 19:59:04.521486 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 19:59:04.521491 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:00.937) 0:10:09.747 *********** 2025-06-22 19:59:04.521496 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521500 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521505 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521509 | orchestrator | 2025-06-22 19:59:04.521514 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 19:59:04.521518 | orchestrator | 2025-06-22 19:59:04.521526 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 19:59:04.521531 | orchestrator | Sunday 22 June 2025 19:57:54 +0000 (0:00:00.984) 0:10:10.732 *********** 2025-06-22 19:59:04.521535 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.521540 | orchestrator | 2025-06-22 19:59:04.521544 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 19:59:04.521549 | orchestrator | Sunday 22 June 2025 19:57:55 +0000 (0:00:00.549) 0:10:11.282 *********** 2025-06-22 19:59:04.521554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.521558 | orchestrator | 2025-06-22 19:59:04.521563 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 19:59:04.521567 | orchestrator | 
Sunday 22 June 2025 19:57:56 +0000 (0:00:00.833) 0:10:12.115 *********** 2025-06-22 19:59:04.521572 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521576 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521581 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521585 | orchestrator | 2025-06-22 19:59:04.521590 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 19:59:04.521594 | orchestrator | Sunday 22 June 2025 19:57:56 +0000 (0:00:00.332) 0:10:12.447 *********** 2025-06-22 19:59:04.521599 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521603 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521608 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521613 | orchestrator | 2025-06-22 19:59:04.521617 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 19:59:04.521622 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:00.675) 0:10:13.123 *********** 2025-06-22 19:59:04.521626 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521631 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521635 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521640 | orchestrator | 2025-06-22 19:59:04.521644 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 19:59:04.521649 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:00.699) 0:10:13.823 *********** 2025-06-22 19:59:04.521654 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521660 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521665 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521669 | orchestrator | 2025-06-22 19:59:04.521674 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 19:59:04.521679 | orchestrator | Sunday 22 June 2025 19:57:58 +0000 (0:00:01.007) 0:10:14.830 *********** 2025-06-22 19:59:04.521683 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521688 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521692 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521697 | orchestrator | 2025-06-22 19:59:04.521701 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 19:59:04.521706 | orchestrator | Sunday 22 June 2025 19:57:59 +0000 (0:00:00.359) 0:10:15.189 *********** 2025-06-22 19:59:04.521711 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521715 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521719 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521724 | orchestrator | 2025-06-22 19:59:04.521729 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 19:59:04.521733 | orchestrator | Sunday 22 June 2025 19:57:59 +0000 (0:00:00.318) 0:10:15.507 *********** 2025-06-22 19:59:04.521738 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521742 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521747 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521751 | orchestrator | 2025-06-22 19:59:04.521756 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 19:59:04.521760 | orchestrator | Sunday 22 June 2025 19:57:59 +0000 (0:00:00.296) 0:10:15.804 *********** 2025-06-22 19:59:04.521769 | orchestrator 
| ok: [testbed-node-3] 2025-06-22 19:59:04.521773 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521778 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521782 | orchestrator | 2025-06-22 19:59:04.521787 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 19:59:04.521791 | orchestrator | Sunday 22 June 2025 19:58:00 +0000 (0:00:01.037) 0:10:16.842 *********** 2025-06-22 19:59:04.521796 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521800 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521805 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521809 | orchestrator | 2025-06-22 19:59:04.521814 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 19:59:04.521820 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:00.642) 0:10:17.485 *********** 2025-06-22 19:59:04.521828 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521836 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521842 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521847 | orchestrator | 2025-06-22 19:59:04.521851 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 19:59:04.521856 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:00.276) 0:10:17.761 *********** 2025-06-22 19:59:04.521861 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521865 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521870 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.521874 | orchestrator | 2025-06-22 19:59:04.521882 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 19:59:04.521886 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:00.256) 0:10:18.017 *********** 2025-06-22 19:59:04.521891 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521895 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521900 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521904 | orchestrator | 2025-06-22 19:59:04.521909 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 19:59:04.521913 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.503) 0:10:18.521 *********** 2025-06-22 19:59:04.521918 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521922 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521926 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521931 | orchestrator | 2025-06-22 19:59:04.521935 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 19:59:04.521940 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.283) 0:10:18.804 *********** 2025-06-22 19:59:04.521945 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.521949 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.521953 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.521958 | orchestrator | 2025-06-22 19:59:04.521962 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 19:59:04.521967 | orchestrator | Sunday 22 June 2025 19:58:03 +0000 (0:00:00.298) 0:10:19.102 *********** 2025-06-22 19:59:04.521971 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.521976 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.521980 | orchestrator | 
skipping: [testbed-node-5] 2025-06-22 19:59:04.521985 | orchestrator | 2025-06-22 19:59:04.521989 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 19:59:04.521994 | orchestrator | Sunday 22 June 2025 19:58:03 +0000 (0:00:00.275) 0:10:19.377 *********** 2025-06-22 19:59:04.521998 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522003 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522007 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522042 | orchestrator | 2025-06-22 19:59:04.522048 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 19:59:04.522052 | orchestrator | Sunday 22 June 2025 19:58:03 +0000 (0:00:00.478) 0:10:19.856 *********** 2025-06-22 19:59:04.522057 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522065 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522070 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522074 | orchestrator | 2025-06-22 19:59:04.522079 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 19:59:04.522084 | orchestrator | Sunday 22 June 2025 19:58:04 +0000 (0:00:00.273) 0:10:20.129 *********** 2025-06-22 19:59:04.522088 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.522093 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.522097 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.522102 | orchestrator | 2025-06-22 19:59:04.522106 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 19:59:04.522111 | orchestrator | Sunday 22 June 2025 19:58:04 +0000 (0:00:00.329) 0:10:20.459 *********** 2025-06-22 19:59:04.522116 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.522120 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.522124 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.522129 | orchestrator | 2025-06-22 19:59:04.522137 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-22 19:59:04.522142 | orchestrator | Sunday 22 June 2025 19:58:05 +0000 (0:00:00.814) 0:10:21.273 *********** 2025-06-22 19:59:04.522146 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.522151 | orchestrator | 2025-06-22 19:59:04.522156 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 19:59:04.522160 | orchestrator | Sunday 22 June 2025 19:58:05 +0000 (0:00:00.479) 0:10:21.753 *********** 2025-06-22 19:59:04.522165 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522169 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.522174 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.522178 | orchestrator | 2025-06-22 19:59:04.522183 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 19:59:04.522187 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:01.998) 0:10:23.751 *********** 2025-06-22 19:59:04.522192 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:59:04.522196 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 19:59:04.522201 | orchestrator | changed: [testbed-node-3] 2025-06-22 
19:59:04.522205 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:59:04.522210 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 19:59:04.522215 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.522236 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:59:04.522244 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 19:59:04.522251 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.522259 | orchestrator | 2025-06-22 19:59:04.522266 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-22 19:59:04.522273 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:01.345) 0:10:25.097 *********** 2025-06-22 19:59:04.522281 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522285 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522290 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522294 | orchestrator | 2025-06-22 19:59:04.522299 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-22 19:59:04.522303 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:00.488) 0:10:25.586 *********** 2025-06-22 19:59:04.522308 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.522312 | orchestrator | 2025-06-22 19:59:04.522317 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-22 19:59:04.522321 | orchestrator | Sunday 22 June 2025 19:58:10 +0000 (0:00:00.483) 0:10:26.069 *********** 2025-06-22 19:59:04.522329 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522338 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522347 | orchestrator | 2025-06-22 19:59:04.522352 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-22 19:59:04.522356 | orchestrator | Sunday 22 June 2025 19:58:11 +0000 (0:00:01.250) 0:10:27.319 *********** 2025-06-22 19:59:04.522361 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522365 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 19:59:04.522370 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522374 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 19:59:04.522379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522383 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 19:59:04.522388 | orchestrator | 2025-06-22 19:59:04.522392 | orchestrator | TASK 
[ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 19:59:04.522397 | orchestrator | Sunday 22 June 2025 19:58:15 +0000 (0:00:04.106) 0:10:31.426 *********** 2025-06-22 19:59:04.522402 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522406 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.522411 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522415 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.522420 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 19:59:04.522424 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 19:59:04.522429 | orchestrator | 2025-06-22 19:59:04.522433 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 19:59:04.522438 | orchestrator | Sunday 22 June 2025 19:58:17 +0000 (0:00:02.173) 0:10:33.599 *********** 2025-06-22 19:59:04.522445 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:59:04.522450 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.522455 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:59:04.522459 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.522464 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:59:04.522468 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.522473 | orchestrator | 2025-06-22 19:59:04.522477 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-22 19:59:04.522482 | orchestrator | Sunday 22 June 2025 19:58:18 +0000 (0:00:01.170) 0:10:34.770 *********** 2025-06-22 19:59:04.522486 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-22 19:59:04.522491 | orchestrator | 2025-06-22 19:59:04.522495 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-22 19:59:04.522500 | orchestrator | Sunday 22 June 2025 19:58:18 +0000 (0:00:00.225) 0:10:34.995 *********** 2025-06-22 19:59:04.522504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522531 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522536 | orchestrator | 2025-06-22 19:59:04.522540 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-22 19:59:04.522545 | orchestrator | Sunday 22 June 2025 19:58:20 +0000 (0:00:01.158) 0:10:36.154 *********** 2025-06-22 19:59:04.522549 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 19:59:04.522575 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522579 | orchestrator | 2025-06-22 19:59:04.522584 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-22 19:59:04.522589 | orchestrator | Sunday 22 June 2025 19:58:20 +0000 (0:00:00.585) 0:10:36.739 *********** 2025-06-22 19:59:04.522593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 19:59:04.522598 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 19:59:04.522602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 19:59:04.522607 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 19:59:04.522612 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 19:59:04.522616 | orchestrator | 2025-06-22 19:59:04.522621 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-22 19:59:04.522625 | orchestrator | Sunday 22 June 2025 19:58:49 +0000 (0:00:28.634) 0:11:05.374 *********** 2025-06-22 19:59:04.522630 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522634 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522639 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522643 | orchestrator | 2025-06-22 19:59:04.522648 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-22 19:59:04.522652 | orchestrator | Sunday 22 June 2025 19:58:49 +0000 (0:00:00.323) 0:11:05.698 *********** 2025-06-22 19:59:04.522657 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522661 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522666 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522670 | orchestrator | 2025-06-22 19:59:04.522675 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-22 19:59:04.522679 | orchestrator | Sunday 22 June 2025 19:58:50 +0000 (0:00:00.377) 0:11:06.076 *********** 2025-06-22 19:59:04.522690 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.522695 | orchestrator | 2025-06-22 19:59:04.522700 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-22 19:59:04.522728 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:00.947) 0:11:07.023 *********** 2025-06-22 19:59:04.522733 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.522738 | orchestrator | 2025-06-22 19:59:04.522743 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-22 19:59:04.522747 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:00.549) 0:11:07.573 *********** 2025-06-22 19:59:04.522752 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.522756 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.522761 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.522765 | orchestrator | 2025-06-22 19:59:04.522770 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-22 19:59:04.522774 | orchestrator | Sunday 22 June 2025 19:58:52 +0000 (0:00:01.213) 0:11:08.787 *********** 2025-06-22 19:59:04.522779 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.522784 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.522788 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.522792 | orchestrator | 2025-06-22 19:59:04.522797 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-22 19:59:04.522802 | orchestrator | Sunday 22 June 2025 19:58:54 +0000 (0:00:01.535) 0:11:10.323 *********** 2025-06-22 19:59:04.522806 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:59:04.522811 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:59:04.522815 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:59:04.522820 | orchestrator | 2025-06-22 19:59:04.522824 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-22 19:59:04.522829 | orchestrator | Sunday 22 June 2025 19:58:56 +0000 (0:00:01.721) 0:11:12.045 *********** 2025-06-22 19:59:04.522834 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522838 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522843 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 19:59:04.522847 | orchestrator | 2025-06-22 19:59:04.522852 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 19:59:04.522857 | orchestrator | Sunday 22 June 2025 19:58:58 +0000 (0:00:02.547) 0:11:14.593 *********** 2025-06-22 19:59:04.522861 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522866 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522870 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522875 | orchestrator | 2025-06-22 19:59:04.522882 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 19:59:04.522887 | orchestrator | Sunday 22 June 
2025 19:58:58 +0000 (0:00:00.355) 0:11:14.948 *********** 2025-06-22 19:59:04.522891 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:59:04.522896 | orchestrator | 2025-06-22 19:59:04.522900 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 19:59:04.522905 | orchestrator | Sunday 22 June 2025 19:58:59 +0000 (0:00:00.522) 0:11:15.470 *********** 2025-06-22 19:59:04.522909 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.522914 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.522918 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.522923 | orchestrator | 2025-06-22 19:59:04.522928 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 19:59:04.522932 | orchestrator | Sunday 22 June 2025 19:59:00 +0000 (0:00:00.594) 0:11:16.064 *********** 2025-06-22 19:59:04.522940 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522945 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:59:04.522949 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:59:04.522954 | orchestrator | 2025-06-22 19:59:04.522958 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 19:59:04.522963 | orchestrator | Sunday 22 June 2025 19:59:00 +0000 (0:00:00.351) 0:11:16.416 *********** 2025-06-22 19:59:04.522967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:59:04.522972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:59:04.522976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:59:04.522981 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:59:04.522985 | orchestrator | 2025-06-22 19:59:04.522990 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 19:59:04.522994 | orchestrator | Sunday 22 June 2025 19:59:01 +0000 (0:00:00.749) 0:11:17.166 *********** 2025-06-22 19:59:04.522999 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:59:04.523003 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:59:04.523008 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:59:04.523012 | orchestrator | 2025-06-22 19:59:04.523017 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:59:04.523022 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-22 19:59:04.523026 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-22 19:59:04.523031 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-22 19:59:04.523039 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-22 19:59:04.523043 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-22 19:59:04.523048 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-22 19:59:04.523053 | orchestrator | 2025-06-22 19:59:04.523057 | orchestrator | 2025-06-22 19:59:04.523062 | orchestrator | 2025-06-22 19:59:04.523066 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 19:59:04.523071 | orchestrator | Sunday 22 June 2025 19:59:01 +0000 (0:00:00.256) 0:11:17.423 *********** 2025-06-22 19:59:04.523075 | orchestrator | =============================================================================== 2025-06-22 19:59:04.523080 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 76.06s 2025-06-22 19:59:04.523085 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 47.40s 2025-06-22 19:59:04.523089 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.21s 2025-06-22 19:59:04.523094 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.63s 2025-06-22 19:59:04.523098 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.78s 2025-06-22 19:59:04.523103 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.65s 2025-06-22 19:59:04.523107 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.28s 2025-06-22 19:59:04.523112 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 12.07s 2025-06-22 19:59:04.523116 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.29s 2025-06-22 19:59:04.523121 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.89s 2025-06-22 19:59:04.523131 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.86s 2025-06-22 19:59:04.523135 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.98s 2025-06-22 19:59:04.523140 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.82s 2025-06-22 19:59:04.523144 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.11s 2025-06-22 19:59:04.523149 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.87s 2025-06-22 19:59:04.523153 | orchestrator | ceph-osd : Collect osd ids ---------------------------------------------- 3.38s 2025-06-22 19:59:04.523161 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.36s 2025-06-22 19:59:04.523165 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.29s 2025-06-22 19:59:04.523170 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.29s 2025-06-22 19:59:04.523174 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.20s 2025-06-22 19:59:07.558826 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:07.560126 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:07.561680 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:07.561704 | orchestrator | 2025-06-22 19:59:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:10.612962 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:10.614572 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 
6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:10.616849 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:10.617457 | orchestrator | 2025-06-22 19:59:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:13.668070 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:13.668784 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:13.669960 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:13.669986 | orchestrator | 2025-06-22 19:59:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:16.719555 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:16.720977 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:16.723185 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:16.723928 | orchestrator | 2025-06-22 19:59:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:19.768873 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:19.768968 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:19.772898 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:19.772940 | orchestrator | 2025-06-22 19:59:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:22.819507 | orchestrator | 2025-06-22 19:59:22 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:22.819802 | orchestrator | 2025-06-22 19:59:22 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:22.820826 | orchestrator | 2025-06-22 19:59:22 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state STARTED 2025-06-22 19:59:22.820850 | orchestrator | 2025-06-22 19:59:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:25.883906 | orchestrator | 2025-06-22 19:59:25 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED 2025-06-22 19:59:25.885060 | orchestrator | 2025-06-22 19:59:25 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:25.888814 | orchestrator | 2025-06-22 19:59:25.888886 | orchestrator | 2025-06-22 19:59:25.888901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:59:25.888913 | orchestrator | 2025-06-22 19:59:25.888923 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:59:25.888933 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.343) 0:00:00.343 *********** 2025-06-22 19:59:25.888943 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:25.888953 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:25.888963 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:25.888972 | orchestrator | 2025-06-22 19:59:25.888981 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:59:25.888990 | 
orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.305) 0:00:00.648 *********** 2025-06-22 19:59:25.889000 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-22 19:59:25.889010 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-22 19:59:25.889020 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-22 19:59:25.889028 | orchestrator | 2025-06-22 19:59:25.889038 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-22 19:59:25.889048 | orchestrator | 2025-06-22 19:59:25.889067 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 19:59:25.889077 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.433) 0:00:01.082 *********** 2025-06-22 19:59:25.889087 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:25.889097 | orchestrator | 2025-06-22 19:59:25.889107 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-22 19:59:25.889115 | orchestrator | Sunday 22 June 2025 19:56:37 +0000 (0:00:00.532) 0:00:01.614 *********** 2025-06-22 19:59:25.889124 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:59:25.889133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:59:25.889143 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:59:25.889152 | orchestrator | 2025-06-22 19:59:25.889162 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-22 19:59:25.889171 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:00.677) 0:00:02.292 *********** 2025-06-22 19:59:25.889185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889334 | orchestrator | 2025-06-22 19:59:25.889345 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 19:59:25.889357 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:02.163) 0:00:04.456 *********** 2025-06-22 19:59:25.889368 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:25.889380 | orchestrator | 2025-06-22 19:59:25.889390 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-22 19:59:25.889401 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:00.671) 0:00:05.127 *********** 2025-06-22 19:59:25.889421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889515 | orchestrator | 2025-06-22 19:59:25.889525 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:25.889536 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:02.955) 0:00:08.082 *********** 2025-06-22 19:59:25.889546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889574 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:25.889594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889621 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:25.889632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889666 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:25.889678 | orchestrator | 2025-06-22 19:59:25.889691 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-22 19:59:25.889703 | orchestrator | Sunday 22 June 2025 19:56:45 
+0000 (0:00:01.981) 0:00:10.064 *********** 2025-06-22 19:59:25.889722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889751 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:25.889762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889792 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:25.889807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:25.889823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:25.889835 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:25.889845 | orchestrator | 2025-06-22 19:59:25.889855 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-22 19:59:25.889872 | orchestrator | Sunday 22 June 2025 19:56:46 +0000 (0:00:00.920) 0:00:10.984 *********** 2025-06-22 19:59:25.889886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.889926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.889970 | orchestrator | 2025-06-22 19:59:25.889981 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-22 19:59:25.889991 | orchestrator | Sunday 22 June 2025 19:56:49 +0000 (0:00:02.432) 0:00:13.417 *********** 2025-06-22 19:59:25.890001 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:25.890010 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:25.890071 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:25.890082 | orchestrator | 2025-06-22 19:59:25.890091 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-22 19:59:25.890100 | orchestrator | Sunday 22 June 2025 19:56:52 +0000 (0:00:02.939) 0:00:16.356 *********** 2025-06-22 19:59:25.890110 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:25.890121 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:25.890132 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:25.890142 | orchestrator | 2025-06-22 19:59:25.890152 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-22 19:59:25.890163 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:02.224) 0:00:18.581 *********** 2025-06-22 19:59:25.890184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.890201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.890268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:25.890279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.890299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.890314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:25.890335 | orchestrator | 2025-06-22 19:59:25.890345 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 19:59:25.890356 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:02.472) 0:00:21.053 *********** 2025-06-22 19:59:25.890366 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:25.890375 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:25.890385 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:25.890394 | orchestrator | 2025-06-22 19:59:25.890404 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 19:59:25.890413 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.246) 0:00:21.299 *********** 2025-06-22 19:59:25.890423 | orchestrator | 2025-06-22 19:59:25.890432 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 19:59:25.890442 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.057) 0:00:21.356 *********** 2025-06-22 19:59:25.890453 | orchestrator | 2025-06-22 19:59:25.890462 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 19:59:25.890471 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.061) 0:00:21.418 *********** 2025-06-22 19:59:25.890481 | orchestrator | 2025-06-22 19:59:25.890491 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-22 19:59:25.890501 | 
orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.169) 0:00:21.587 ***********
2025-06-22 19:59:25.890511 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:59:25.890520 | orchestrator |
2025-06-22 19:59:25.890530 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-06-22 19:59:25.890539 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.170) 0:00:21.758 ***********
2025-06-22 19:59:25.890548 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:59:25.890558 | orchestrator |
2025-06-22 19:59:25.890568 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-06-22 19:59:25.890576 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.169) 0:00:21.927 ***********
2025-06-22 19:59:25.890586 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:59:25.890594 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:59:25.890604 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:59:25.890614 | orchestrator |
2025-06-22 19:59:25.890623 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-06-22 19:59:25.890633 | orchestrator | Sunday 22 June 2025 19:58:04 +0000 (0:01:06.375) 0:01:28.302 ***********
2025-06-22 19:59:25.890642 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:59:25.890652 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:59:25.890662 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:59:25.890671 | orchestrator |
2025-06-22 19:59:25.890679 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-22 19:59:25.890688 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:01:10.100) 0:02:38.403 ***********
2025-06-22 19:59:25.890697 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 19:59:25.890705 | orchestrator |
2025-06-22 19:59:25.890714 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-22 19:59:25.890733 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.765) 0:02:39.169 ***********
2025-06-22 19:59:25.890743 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:59:25.890754 | orchestrator |
2025-06-22 19:59:25.890763 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-22 19:59:25.890773 | orchestrator | Sunday 22 June 2025 19:59:17 +0000 (0:00:02.333) 0:02:41.503 ***********
2025-06-22 19:59:25.890783 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:59:25.890793 | orchestrator |
2025-06-22 19:59:25.890802 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-22 19:59:25.890812 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:02.245) 0:02:43.748 ***********
2025-06-22 19:59:25.890821 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:59:25.890830 | orchestrator |
2025-06-22 19:59:25.890840 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-22 19:59:25.890850 | orchestrator | Sunday 22 June 2025 19:59:22 +0000 (0:00:02.741) 0:02:46.489 ***********
2025-06-22 19:59:25.890860 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:59:25.890869 | orchestrator |
2025-06-22 19:59:25.890888 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:59:25.890900 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 19:59:25.890910 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 19:59:25.890919 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 19:59:25.890929 | orchestrator |
2025-06-22 19:59:25.890938 | orchestrator |
2025-06-22 19:59:25.890948 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:59:25.890957 | orchestrator | Sunday 22 June 2025 19:59:24 +0000 (0:00:02.531) 0:02:49.021 ***********
2025-06-22 19:59:25.890967 | orchestrator | ===============================================================================
2025-06-22 19:59:25.890978 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.10s
2025-06-22 19:59:25.890994 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.38s
2025-06-22 19:59:25.891005 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.96s
2025-06-22 19:59:25.891015 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.94s
2025-06-22 19:59:25.891023 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s
2025-06-22 19:59:25.891032 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.53s
2025-06-22 19:59:25.891041 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.47s
2025-06-22 19:59:25.891051 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.43s
2025-06-22 19:59:25.891060 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s
2025-06-22 19:59:25.891069 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.25s
2025-06-22 19:59:25.891078 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.22s
2025-06-22 19:59:25.891087 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.16s
2025-06-22 19:59:25.891097 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.98s
2025-06-22 19:59:25.891106 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.92s
2025-06-22 19:59:25.891115 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s
2025-06-22 19:59:25.891124 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s
2025-06-22 19:59:25.891133 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s
2025-06-22 19:59:25.891152 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-06-22 19:59:25.891162 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-06-22 19:59:25.891171 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-06-22 19:59:25.891180 | orchestrator | 2025-06-22 19:59:25 | INFO  | Task 0ce98e6c-fb32-4d98-9d67-56ccb4f674c3 is in state SUCCESS
2025-06-22 19:59:25.891190 | orchestrator | 2025-06-22 19:59:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:28.933169 | orchestrator | 2025-06-22 19:59:28 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:28.934454 | orchestrator | 2025-06-22 19:59:28 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:28.934503 | orchestrator | 2025-06-22 19:59:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:31.976538 | orchestrator | 2025-06-22 19:59:31 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:31.978242 | orchestrator | 2025-06-22 19:59:31 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:31.978284 | orchestrator | 2025-06-22 19:59:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:35.035583 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:35.035715 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:35.035862 | orchestrator | 2025-06-22 19:59:35 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:38.092728 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:38.093610 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:38.093652 | orchestrator | 2025-06-22 19:59:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:41.142409 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:41.144589 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:41.144628 | orchestrator | 2025-06-22 19:59:41 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:44.196142 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:44.197151 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:44.197321 | orchestrator | 2025-06-22 19:59:44 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:47.252496 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state STARTED
2025-06-22 19:59:47.254209 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED
2025-06-22 19:59:47.254241 | orchestrator | 2025-06-22 19:59:47 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:50.334449 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 87e6422b-2462-43c3-bbe3-8edf2376b369 is in state SUCCESS
2025-06-22 19:59:50.336621 | orchestrator |
2025-06-22 19:59:50.336667 | orchestrator |
2025-06-22 19:59:50.336681 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-06-22 19:59:50.336694 | orchestrator |
2025-06-22 19:59:50.336705 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-22 19:59:50.336717 | orchestrator | Sunday 22 June 2025 19:56:35 +0000 (0:00:00.120) 0:00:00.120 ***********
2025-06-22 19:59:50.336754 | orchestrator | ok: [localhost] => {
2025-06-22 19:59:50.336768 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been 
deployed. This is fine." 2025-06-22 19:59:50.336779 | orchestrator | } 2025-06-22 19:59:50.336790 | orchestrator | 2025-06-22 19:59:50.336801 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-22 19:59:50.336812 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.062) 0:00:00.182 *********** 2025-06-22 19:59:50.336823 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-22 19:59:50.336835 | orchestrator | ...ignoring 2025-06-22 19:59:50.336846 | orchestrator | 2025-06-22 19:59:50.336857 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-22 19:59:50.336868 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:02.939) 0:00:03.122 *********** 2025-06-22 19:59:50.336879 | orchestrator | skipping: [localhost] 2025-06-22 19:59:50.336889 | orchestrator | 2025-06-22 19:59:50.336900 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-22 19:59:50.336911 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.067) 0:00:03.189 *********** 2025-06-22 19:59:50.336922 | orchestrator | ok: [localhost] 2025-06-22 19:59:50.336932 | orchestrator | 2025-06-22 19:59:50.336944 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:59:50.336954 | orchestrator | 2025-06-22 19:59:50.336965 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:59:50.336976 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.211) 0:00:03.401 *********** 2025-06-22 19:59:50.336987 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.336997 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.337008 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.337018 | orchestrator | 2025-06-22 19:59:50.337029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:59:50.337040 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.337) 0:00:03.738 *********** 2025-06-22 19:59:50.337051 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 19:59:50.337061 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 19:59:50.337072 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 19:59:50.337083 | orchestrator | 2025-06-22 19:59:50.337094 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 19:59:50.337104 | orchestrator | 2025-06-22 19:59:50.337115 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 19:59:50.337126 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:00.922) 0:00:04.660 *********** 2025-06-22 19:59:50.337136 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:59:50.337147 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 19:59:50.337158 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 19:59:50.337168 | orchestrator | 2025-06-22 19:59:50.337179 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 19:59:50.337224 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:00.468) 0:00:05.129 *********** 2025-06-22 
19:59:50.337237 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.337250 | orchestrator | 2025-06-22 19:59:50.337263 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-22 19:59:50.337275 | orchestrator | Sunday 22 June 2025 19:56:41 +0000 (0:00:00.809) 0:00:05.938 *********** 2025-06-22 19:59:50.337316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337386 | orchestrator | 2025-06-22 19:59:50.337408 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-22 19:59:50.337421 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:03.786) 0:00:09.724 *********** 2025-06-22 19:59:50.337433 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.337444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.337456 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.337468 | orchestrator | 2025-06-22 19:59:50.337480 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-22 19:59:50.337492 | orchestrator | Sunday 22 June 2025 19:56:46 +0000 (0:00:00.685) 0:00:10.410 *********** 2025-06-22 19:59:50.337504 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.337516 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.337528 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.337540 | orchestrator | 2025-06-22 19:59:50.337552 | 
orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-22 19:59:50.337564 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:01.392) 0:00:11.802 *********** 2025-06-22 19:59:50.337578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.337642 | orchestrator | 2025-06-22 19:59:50.337653 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-22 19:59:50.337664 | orchestrator | Sunday 22 June 2025 19:56:51 +0000 (0:00:03.491) 0:00:15.294 *********** 2025-06-22 19:59:50.337682 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.337693 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.337704 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.337715 | orchestrator | 2025-06-22 19:59:50.337726 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-22 19:59:50.337737 | orchestrator | Sunday 22 June 2025 19:56:52 +0000 (0:00:01.124) 0:00:16.419 *********** 2025-06-22 19:59:50.337748 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.337759 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.337770 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.337781 | orchestrator | 2025-06-22 19:59:50.337792 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 19:59:50.337802 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:04.985) 0:00:21.404 *********** 
2025-06-22 19:59:50.337813 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.337824 | orchestrator | 2025-06-22 19:59:50.337835 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 19:59:50.337846 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.485) 0:00:21.890 *********** 2025-06-22 19:59:50.337871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.337884 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.337896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.337915 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.337939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.337952 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.337963 | orchestrator | 2025-06-22 19:59:50.337974 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:50.337987 | orchestrator | Sunday 22 June 2025 19:57:01 +0000 (0:00:03.556) 0:00:25.446 *********** 2025-06-22 19:59:50.338006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338081 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.338117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338130 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.338142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338161 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.338172 | orchestrator | 2025-06-22 19:59:50.338183 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 19:59:50.338219 | orchestrator | Sunday 22 June 2025 19:57:04 +0000 (0:00:03.678) 0:00:29.125 *********** 2025-06-22 19:59:50.338244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338257 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.338269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338287 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.338299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.338311 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.338322 | orchestrator | 2025-06-22 19:59:50.338337 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-22 19:59:50.338348 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:03.643) 0:00:32.769 *********** 2025-06-22 19:59:50.338368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.338389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.338415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 19:59:50.338436 | orchestrator | 2025-06-22 19:59:50.338447 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-22 19:59:50.338458 | orchestrator | Sunday 22 June 2025 19:57:12 +0000 (0:00:03.882) 0:00:36.651 *********** 2025-06-22 19:59:50.338468 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.338479 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.338490 | 
orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.338501 | orchestrator | 2025-06-22 19:59:50.338512 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-22 19:59:50.338523 | orchestrator | Sunday 22 June 2025 19:57:13 +0000 (0:00:01.058) 0:00:37.710 *********** 2025-06-22 19:59:50.338533 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.338544 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.338555 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.338566 | orchestrator | 2025-06-22 19:59:50.338577 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-22 19:59:50.338588 | orchestrator | Sunday 22 June 2025 19:57:13 +0000 (0:00:00.318) 0:00:38.028 *********** 2025-06-22 19:59:50.338599 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.338610 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.338621 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.338631 | orchestrator | 2025-06-22 19:59:50.338642 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-22 19:59:50.338653 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:00.322) 0:00:38.351 *********** 2025-06-22 19:59:50.338665 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-22 19:59:50.338676 | orchestrator | ...ignoring 2025-06-22 19:59:50.338687 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-22 19:59:50.338698 | orchestrator | ...ignoring 2025-06-22 19:59:50.338709 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-22 19:59:50.338720 | orchestrator | ...ignoring 2025-06-22 19:59:50.338731 | orchestrator | 2025-06-22 19:59:50.338742 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-22 19:59:50.338753 | orchestrator | Sunday 22 June 2025 19:57:25 +0000 (0:00:10.813) 0:00:49.164 *********** 2025-06-22 19:59:50.338764 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.338774 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.338785 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.338796 | orchestrator | 2025-06-22 19:59:50.338807 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-22 19:59:50.338818 | orchestrator | Sunday 22 June 2025 19:57:25 +0000 (0:00:00.690) 0:00:49.855 *********** 2025-06-22 19:59:50.338829 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.338840 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.338850 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.338861 | orchestrator | 2025-06-22 19:59:50.338872 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-22 19:59:50.338883 | orchestrator | Sunday 22 June 2025 19:57:26 +0000 (0:00:00.393) 0:00:50.248 *********** 2025-06-22 19:59:50.338894 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.338904 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.338915 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.338932 | orchestrator | 2025-06-22 19:59:50.338943 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-22 19:59:50.338954 | orchestrator | Sunday 22 June 2025 19:57:26 +0000 (0:00:00.394) 0:00:50.643 *********** 2025-06-22 19:59:50.338965 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.338976 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.338992 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.339003 | orchestrator | 2025-06-22 19:59:50.339014 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-22 19:59:50.339030 | orchestrator | Sunday 22 June 2025 19:57:26 +0000 (0:00:00.422) 0:00:51.066 *********** 2025-06-22 19:59:50.339041 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.339052 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.339063 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.339074 | orchestrator | 2025-06-22 19:59:50.339085 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-22 19:59:50.339096 | orchestrator | Sunday 22 June 2025 19:57:27 +0000 (0:00:00.616) 0:00:51.682 *********** 2025-06-22 19:59:50.339107 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.339118 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.339129 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.339140 | orchestrator | 2025-06-22 19:59:50.339151 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 19:59:50.339162 | orchestrator | Sunday 22 June 2025 19:57:27 +0000 (0:00:00.420) 0:00:52.103 *********** 2025-06-22 19:59:50.339172 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.339184 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:59:50.339284 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-22 19:59:50.339303 | orchestrator | 2025-06-22 19:59:50.339314 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-22 19:59:50.339325 | orchestrator | Sunday 22 June 2025 19:57:28 +0000 (0:00:00.352) 0:00:52.455 *********** 2025-06-22 19:59:50.339336 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.339346 | orchestrator | 2025-06-22 19:59:50.339357 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-22 19:59:50.339368 | orchestrator | Sunday 22 June 2025 19:57:38 +0000 (0:00:09.830) 0:01:02.286 *********** 2025-06-22 19:59:50.339379 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.339390 | orchestrator | 2025-06-22 19:59:50.339400 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 19:59:50.339411 | orchestrator | Sunday 22 June 2025 19:57:38 +0000 (0:00:00.122) 0:01:02.408 *********** 2025-06-22 19:59:50.339422 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.339432 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.339443 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.339453 | orchestrator | 2025-06-22 19:59:50.339464 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-22 19:59:50.339475 | orchestrator | Sunday 22 June 2025 19:57:39 +0000 (0:00:01.004) 0:01:03.413 *********** 2025-06-22 19:59:50.339486 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.339496 | orchestrator | 2025-06-22 19:59:50.339507 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-22 19:59:50.339518 | orchestrator | Sunday 22 June 2025 19:57:47 +0000 (0:00:08.163) 0:01:11.576 *********** 2025-06-22 19:59:50.339528 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.339539 | orchestrator | 2025-06-22 19:59:50.339550 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-22 19:59:50.339561 | orchestrator | Sunday 22 June 2025 19:57:48 +0000 (0:00:01.528) 0:01:13.105 *********** 2025-06-22 19:59:50.339571 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.339582 | orchestrator | 2025-06-22 19:59:50.339593 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-22 19:59:50.339604 | orchestrator | Sunday 22 June 2025 19:57:51 +0000 (0:00:02.712) 0:01:15.817 *********** 2025-06-22 19:59:50.339634 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.339644 | orchestrator | 2025-06-22 19:59:50.339654 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-22 19:59:50.339663 | orchestrator | Sunday 22 June 2025 19:57:51 +0000 (0:00:00.122) 0:01:15.939 *********** 2025-06-22 19:59:50.339673 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.339682 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.339692 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.339702 | orchestrator | 2025-06-22 19:59:50.339711 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-22 19:59:50.339721 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:00.556) 0:01:16.495 *********** 
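The run above is the standard first-deployment path for a Galera shard: every node reported a freshly created volume and a dead 3306 port, so no cluster exists yet, testbed-node-0 is chosen to bootstrap it, and the remaining nodes only join (the handler whose results follow) once the bootstrap host is healthy. A condensed sketch of that decision, not the role's actual code; the function name and the per-host facts (port_alive, volume_existed) are invented stand-ins for the group_by results logged above:

def choose_bootstrap(hosts):
    """Return (bootstrap_host, joining_hosts) for one Galera shard."""
    alive = [h for h, f in hosts.items() if f["port_alive"]]
    had_volume = [h for h, f in hosts.items() if f["volume_existed"]]
    if alive:
        # Cluster already running: nobody bootstraps, dead members just (re)join.
        return None, [h for h in hosts if h not in alive]
    if had_volume:
        # Data present but nothing answering: the "Fail on existing but stopped
        # cluster" case above, which needs manual recovery instead.
        raise RuntimeError("existing but stopped cluster; recover it manually")
    first, *rest = list(hosts)
    # Fresh deployment: the first host starts an empty cluster, the rest join it.
    return first, rest

# choose_bootstrap({
#     "testbed-node-0": {"port_alive": False, "volume_existed": False},
#     "testbed-node-1": {"port_alive": False, "volume_existed": False},
#     "testbed-node-2": {"port_alive": False, "volume_existed": False},
# })  ->  ("testbed-node-0", ["testbed-node-1", "testbed-node-2"])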
2025-06-22 19:59:50.339730 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.339740 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 19:59:50.339749 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.339759 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.339768 | orchestrator | 2025-06-22 19:59:50.339778 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 19:59:50.339787 | orchestrator | skipping: no hosts matched 2025-06-22 19:59:50.339797 | orchestrator | 2025-06-22 19:59:50.339806 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 19:59:50.339816 | orchestrator | 2025-06-22 19:59:50.339825 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 19:59:50.339835 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:00.353) 0:01:16.849 *********** 2025-06-22 19:59:50.339844 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.339854 | orchestrator | 2025-06-22 19:59:50.339863 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 19:59:50.339873 | orchestrator | Sunday 22 June 2025 19:58:11 +0000 (0:00:18.575) 0:01:35.424 *********** 2025-06-22 19:59:50.339883 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.339892 | orchestrator | 2025-06-22 19:59:50.339902 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 19:59:50.339911 | orchestrator | Sunday 22 June 2025 19:58:31 +0000 (0:00:20.650) 0:01:56.074 *********** 2025-06-22 19:59:50.339921 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.339930 | orchestrator | 2025-06-22 19:59:50.339940 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 19:59:50.339949 | orchestrator | 2025-06-22 19:59:50.339959 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 19:59:50.339969 | orchestrator | Sunday 22 June 2025 19:58:34 +0000 (0:00:02.469) 0:01:58.544 *********** 2025-06-22 19:59:50.339978 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.339988 | orchestrator | 2025-06-22 19:59:50.340002 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 19:59:50.340019 | orchestrator | Sunday 22 June 2025 19:58:54 +0000 (0:00:19.949) 0:02:18.493 *********** 2025-06-22 19:59:50.340029 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.340039 | orchestrator | 2025-06-22 19:59:50.340048 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 19:59:50.340058 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:20.532) 0:02:39.026 *********** 2025-06-22 19:59:50.340068 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.340077 | orchestrator | 2025-06-22 19:59:50.340087 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 19:59:50.340097 | orchestrator | 2025-06-22 19:59:50.340106 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 19:59:50.340116 | orchestrator | Sunday 22 June 2025 19:59:17 +0000 (0:00:02.864) 0:02:41.890 *********** 2025-06-22 19:59:50.340125 | orchestrator | changed: [testbed-node-0] 
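The repeated "Wait for MariaDB service port liveness" steps above (and the earlier check that timed out with "Timeout when waiting for search string MariaDB in 192.168.16.10:3306") are Ansible wait_for probes: connect to <node>:3306 and look for the string "MariaDB" in the server greeting, which MariaDB embeds in its initial handshake packet. A rough stdlib-only equivalent of that probe; port_reports_mariadb is an invented helper, and the address comes from the node list in this log:

import socket
import time

def port_reports_mariadb(host, port=3306, timeout=10.0):
    """Poll host:port until the MySQL/MariaDB greeting mentions 'MariaDB'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                if b"MariaDB" in sock.recv(1024):   # version string in the handshake
                    return True
        except OSError:
            pass                                    # refused / timed out: not up yet
        time.sleep(1)
    return False

# port_reports_mariadb("192.168.16.10")  # False before bootstrap, True afterwards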
2025-06-22 19:59:50.340135 | orchestrator | 2025-06-22 19:59:50.340145 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 19:59:50.340154 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:11.981) 0:02:53.872 *********** 2025-06-22 19:59:50.340170 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.340179 | orchestrator | 2025-06-22 19:59:50.340213 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 19:59:50.340224 | orchestrator | Sunday 22 June 2025 19:59:35 +0000 (0:00:05.561) 0:02:59.434 *********** 2025-06-22 19:59:50.340234 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.340244 | orchestrator | 2025-06-22 19:59:50.340254 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 19:59:50.340263 | orchestrator | 2025-06-22 19:59:50.340273 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 19:59:50.340282 | orchestrator | Sunday 22 June 2025 19:59:37 +0000 (0:00:02.557) 0:03:01.991 *********** 2025-06-22 19:59:50.340292 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.340302 | orchestrator | 2025-06-22 19:59:50.340311 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-22 19:59:50.340321 | orchestrator | Sunday 22 June 2025 19:59:38 +0000 (0:00:00.530) 0:03:02.521 *********** 2025-06-22 19:59:50.340330 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.340340 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.340349 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.340359 | orchestrator | 2025-06-22 19:59:50.340368 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-22 19:59:50.340378 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:02.240) 0:03:04.762 *********** 2025-06-22 19:59:50.340387 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.340397 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.340406 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.340416 | orchestrator | 2025-06-22 19:59:50.340426 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-22 19:59:50.340435 | orchestrator | Sunday 22 June 2025 19:59:42 +0000 (0:00:01.857) 0:03:06.619 *********** 2025-06-22 19:59:50.340444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.340454 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.340463 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.340473 | orchestrator | 2025-06-22 19:59:50.340482 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-22 19:59:50.340492 | orchestrator | Sunday 22 June 2025 19:59:44 +0000 (0:00:02.152) 0:03:08.772 *********** 2025-06-22 19:59:50.340501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.340511 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.340520 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.340530 | orchestrator | 2025-06-22 19:59:50.340539 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-22 19:59:50.340549 | orchestrator | Sunday 22 June 2025 19:59:46 +0000 (0:00:01.912) 0:03:10.684 *********** 
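The monitor account created in the post-deploy tasks above is the one referenced by the container healthcheck shown in the service definition earlier (['CMD-SHELL', '/usr/bin/clustercheck']) and, indirectly, by the HAProxy backend checks. clustercheck itself is a shell script shipped in the image; the sketch below only approximates its verdict, using the third-party pymysql client, an invented function name, and a placeholder password (the real one lives in the generated kolla configuration):

import pymysql  # third-party MySQL client, used here purely for illustration

def galera_node_is_usable(host, password, available_when_donor=True):
    """Roughly what clustercheck answers: may HAProxy send traffic to this node?"""
    try:
        conn = pymysql.connect(host=host, port=3306, user="monitor", password=password)
    except pymysql.MySQLError:
        return False                    # not accepting connections at all
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state'")
            state = int(cur.fetchone()[1])
    finally:
        conn.close()
    # 4 = Synced; 2 = Donor/Desynced, accepted here because the container sets
    # AVAILABLE_WHEN_DONOR=1 in its environment.
    return state == 4 or (available_when_donor and state == 2)

# galera_node_is_usable("192.168.16.10", "<monitor password>")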
2025-06-22 19:59:50.340558 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.340568 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.340577 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.340587 | orchestrator | 2025-06-22 19:59:50.340596 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 19:59:50.340606 | orchestrator | Sunday 22 June 2025 19:59:49 +0000 (0:00:02.816) 0:03:13.501 *********** 2025-06-22 19:59:50.340615 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.340625 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.340634 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.340644 | orchestrator | 2025-06-22 19:59:50.340653 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:59:50.340663 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 19:59:50.340673 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-22 19:59:50.340691 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 19:59:50.340701 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 19:59:50.340711 | orchestrator | 2025-06-22 19:59:50.340720 | orchestrator | 2025-06-22 19:59:50.340730 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:59:50.340740 | orchestrator | Sunday 22 June 2025 19:59:49 +0000 (0:00:00.239) 0:03:13.740 *********** 2025-06-22 19:59:50.340749 | orchestrator | =============================================================================== 2025-06-22 19:59:50.340759 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.18s 2025-06-22 19:59:50.340773 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.52s 2025-06-22 19:59:50.340788 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.98s 2025-06-22 19:59:50.340798 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.81s 2025-06-22 19:59:50.340808 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.83s 2025-06-22 19:59:50.340817 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.16s 2025-06-22 19:59:50.340827 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.56s 2025-06-22 19:59:50.340837 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.33s 2025-06-22 19:59:50.340846 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.99s 2025-06-22 19:59:50.340856 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.88s 2025-06-22 19:59:50.340865 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.79s 2025-06-22 19:59:50.340875 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.68s 2025-06-22 19:59:50.340885 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.64s 2025-06-22 19:59:50.340894 | orchestrator | service-cert-copy : mariadb | Copying over 
extra CA certificates -------- 3.56s 2025-06-22 19:59:50.340904 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.49s 2025-06-22 19:59:50.340913 | orchestrator | Check MariaDB service --------------------------------------------------- 2.94s 2025-06-22 19:59:50.340923 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.82s 2025-06-22 19:59:50.340933 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.71s 2025-06-22 19:59:50.340942 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.56s 2025-06-22 19:59:50.340952 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.24s 2025-06-22 19:59:50.342277 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:50.342650 | orchestrator | 2025-06-22 19:59:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:53.384094 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 19:59:53.384377 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:53.385273 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 19:59:53.385308 | orchestrator | 2025-06-22 19:59:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:56.424886 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 19:59:56.424983 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:56.425147 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 19:59:56.425167 | orchestrator | 2025-06-22 19:59:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:59.469029 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 19:59:59.469115 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 19:59:59.469649 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 19:59:59.469673 | orchestrator | 2025-06-22 19:59:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:02.513799 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:02.516096 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:02.517746 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:02.517775 | orchestrator | 2025-06-22 20:00:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:05.563314 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:05.563425 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:05.564327 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:05.564359 | 
orchestrator | 2025-06-22 20:00:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:08.606646 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:08.607382 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:08.609092 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:08.609132 | orchestrator | 2025-06-22 20:00:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:11.645485 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:11.645604 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:11.646385 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:11.646413 | orchestrator | 2025-06-22 20:00:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:14.690095 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:14.690952 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:14.691907 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:14.692115 | orchestrator | 2025-06-22 20:00:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:17.773950 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:17.777273 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:17.780291 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:17.780345 | orchestrator | 2025-06-22 20:00:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:20.823702 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:20.824073 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:20.825631 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:20.825657 | orchestrator | 2025-06-22 20:00:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:23.865312 | orchestrator | 2025-06-22 20:00:23 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:23.865571 | orchestrator | 2025-06-22 20:00:23 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:23.866465 | orchestrator | 2025-06-22 20:00:23 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:23.866506 | orchestrator | 2025-06-22 20:00:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:26.927600 | orchestrator | 2025-06-22 20:00:26 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:26.928297 | orchestrator | 2025-06-22 20:00:26 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:26.929378 | orchestrator | 2025-06-22 20:00:26 | INFO  | Task 
0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:26.929862 | orchestrator | 2025-06-22 20:00:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:29.988300 | orchestrator | 2025-06-22 20:00:29 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:29.990272 | orchestrator | 2025-06-22 20:00:29 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:29.992247 | orchestrator | 2025-06-22 20:00:29 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:29.992657 | orchestrator | 2025-06-22 20:00:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:33.056850 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:33.056956 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:33.056972 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:33.056985 | orchestrator | 2025-06-22 20:00:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:36.107552 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:36.108978 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:36.110984 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:36.111067 | orchestrator | 2025-06-22 20:00:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:39.164496 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:39.169709 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:39.169739 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:39.169779 | orchestrator | 2025-06-22 20:00:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:42.223808 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:42.224773 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:42.226255 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:42.226342 | orchestrator | 2025-06-22 20:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:45.273265 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:45.273708 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:45.274800 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:45.274855 | orchestrator | 2025-06-22 20:00:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:48.341512 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:48.341608 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state 
STARTED 2025-06-22 20:00:48.341623 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:48.341842 | orchestrator | 2025-06-22 20:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:51.393642 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:51.395263 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:51.397467 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:51.397516 | orchestrator | 2025-06-22 20:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:54.450428 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:54.452524 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:54.455625 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:54.455704 | orchestrator | 2025-06-22 20:00:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:57.513274 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:00:57.516033 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:00:57.518134 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:00:57.518199 | orchestrator | 2025-06-22 20:00:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:00.565579 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:00.568201 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:01:00.570660 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:00.570684 | orchestrator | 2025-06-22 20:01:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:03.622782 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:03.624036 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:01:03.625564 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:03.625592 | orchestrator | 2025-06-22 20:01:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:06.680550 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:06.682136 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:01:06.686786 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:06.686865 | orchestrator | 2025-06-22 20:01:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:09.793086 | orchestrator | 2025-06-22 20:01:09 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:09.794439 | orchestrator 
| 2025-06-22 20:01:09 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:01:09.796141 | orchestrator | 2025-06-22 20:01:09 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:09.796284 | orchestrator | 2025-06-22 20:01:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:12.866635 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:12.872923 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state STARTED 2025-06-22 20:01:12.874881 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:12.874928 | orchestrator | 2025-06-22 20:01:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:15.958886 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:15.961446 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task 6b9ea305-cbed-4506-8481-6c515198563b is in state SUCCESS 2025-06-22 20:01:15.964104 | orchestrator | 2025-06-22 20:01:15.964637 | orchestrator | 2025-06-22 20:01:15.964658 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-22 20:01:15.964669 | orchestrator | 2025-06-22 20:01:15.964679 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 20:01:15.964689 | orchestrator | Sunday 22 June 2025 19:59:06 +0000 (0:00:00.585) 0:00:00.585 *********** 2025-06-22 20:01:15.964699 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:15.964710 | orchestrator | 2025-06-22 20:01:15.964720 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 20:01:15.964730 | orchestrator | Sunday 22 June 2025 19:59:07 +0000 (0:00:00.613) 0:00:01.199 *********** 2025-06-22 20:01:15.964768 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.964780 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.964789 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.964855 | orchestrator | 2025-06-22 20:01:15.964866 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 20:01:15.964876 | orchestrator | Sunday 22 June 2025 19:59:07 +0000 (0:00:00.690) 0:00:01.889 *********** 2025-06-22 20:01:15.964886 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.964896 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.964905 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.964939 | orchestrator | 2025-06-22 20:01:15.964950 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 20:01:15.964960 | orchestrator | Sunday 22 June 2025 19:59:08 +0000 (0:00:00.289) 0:00:02.179 *********** 2025-06-22 20:01:15.964970 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.964979 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965214 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965227 | orchestrator | 2025-06-22 20:01:15.965237 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 20:01:15.965247 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.799) 0:00:02.979 *********** 2025-06-22 20:01:15.965256 | orchestrator | 
ok: [testbed-node-3] 2025-06-22 20:01:15.965266 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965275 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965285 | orchestrator | 2025-06-22 20:01:15.965295 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 20:01:15.965304 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.330) 0:00:03.309 *********** 2025-06-22 20:01:15.965314 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.965323 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965333 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965342 | orchestrator | 2025-06-22 20:01:15.965353 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 20:01:15.965362 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.314) 0:00:03.624 *********** 2025-06-22 20:01:15.965372 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.965381 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965390 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965400 | orchestrator | 2025-06-22 20:01:15.965409 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 20:01:15.965420 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.358) 0:00:03.982 *********** 2025-06-22 20:01:15.965442 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.965453 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.965463 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.965473 | orchestrator | 2025-06-22 20:01:15.965482 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 20:01:15.965492 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.523) 0:00:04.505 *********** 2025-06-22 20:01:15.965501 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.965511 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965520 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965529 | orchestrator | 2025-06-22 20:01:15.965539 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 20:01:15.965548 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.304) 0:00:04.809 *********** 2025-06-22 20:01:15.965558 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:15.965568 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:15.965577 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:15.965586 | orchestrator | 2025-06-22 20:01:15.965596 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 20:01:15.965606 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.645) 0:00:05.454 *********** 2025-06-22 20:01:15.965616 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.965625 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.965635 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.965645 | orchestrator | 2025-06-22 20:01:15.965654 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 20:01:15.965664 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.453) 0:00:05.908 *********** 2025-06-22 20:01:15.965673 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:15.965683 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:15.965703 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:15.965713 | orchestrator | 2025-06-22 20:01:15.965722 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 20:01:15.965732 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:02.136) 0:00:08.044 *********** 2025-06-22 20:01:15.965741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:01:15.965751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:01:15.965761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:01:15.965771 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.965782 | orchestrator | 2025-06-22 20:01:15.965793 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 20:01:15.965850 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.409) 0:00:08.454 *********** 2025-06-22 20:01:15.965867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.965884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.965896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.965909 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.965921 | orchestrator | 2025-06-22 20:01:15.965935 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 20:01:15.965947 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:00.845) 0:00:09.299 *********** 2025-06-22 20:01:15.965963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.965979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.965998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.966011 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966082 | orchestrator | 2025-06-22 20:01:15.966096 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 20:01:15.966107 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:00.158) 0:00:09.457 *********** 2025-06-22 20:01:15.966121 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a7ec8831abe2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 19:59:12.599756', 'end': '2025-06-22 19:59:12.649699', 'delta': '0:00:00.049943', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a7ec8831abe2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-22 20:01:15.966193 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9584a88ffaf0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 19:59:13.387764', 'end': '2025-06-22 19:59:13.430612', 'delta': '0:00:00.042848', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9584a88ffaf0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-22 20:01:15.966255 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd7629d5bce40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 19:59:13.924711', 'end': '2025-06-22 19:59:13.962254', 'delta': '0:00:00.037543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d7629d5bce40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-22 20:01:15.966268 | orchestrator | 2025-06-22 20:01:15.966279 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 20:01:15.966291 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:00.424) 0:00:09.882 *********** 2025-06-22 20:01:15.966302 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.966313 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.966324 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.966335 | orchestrator | 2025-06-22 20:01:15.966346 
| orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 20:01:15.966357 | orchestrator | Sunday 22 June 2025 19:59:16 +0000 (0:00:00.514) 0:00:10.396 *********** 2025-06-22 20:01:15.966368 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-22 20:01:15.966378 | orchestrator | 2025-06-22 20:01:15.966390 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 20:01:15.966400 | orchestrator | Sunday 22 June 2025 19:59:18 +0000 (0:00:01.774) 0:00:12.171 *********** 2025-06-22 20:01:15.966411 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966422 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966432 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966443 | orchestrator | 2025-06-22 20:01:15.966454 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 20:01:15.966465 | orchestrator | Sunday 22 June 2025 19:59:18 +0000 (0:00:00.297) 0:00:12.469 *********** 2025-06-22 20:01:15.966476 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966487 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966497 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966508 | orchestrator | 2025-06-22 20:01:15.966519 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:01:15.966529 | orchestrator | Sunday 22 June 2025 19:59:18 +0000 (0:00:00.439) 0:00:12.908 *********** 2025-06-22 20:01:15.966540 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966559 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966570 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966580 | orchestrator | 2025-06-22 20:01:15.966591 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 20:01:15.966602 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:00.483) 0:00:13.391 *********** 2025-06-22 20:01:15.966614 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.966624 | orchestrator | 2025-06-22 20:01:15.966635 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 20:01:15.966646 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:00.138) 0:00:13.530 *********** 2025-06-22 20:01:15.966657 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966668 | orchestrator | 2025-06-22 20:01:15.966679 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:01:15.966690 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:00.237) 0:00:13.767 *********** 2025-06-22 20:01:15.966700 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966711 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966722 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966733 | orchestrator | 2025-06-22 20:01:15.966744 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 20:01:15.966754 | orchestrator | Sunday 22 June 2025 19:59:20 +0000 (0:00:00.305) 0:00:14.073 *********** 2025-06-22 20:01:15.966765 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966776 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966787 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966798 | 
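The fsid tasks in this block read the cluster fsid back from the already-running cluster (testbed-node-3 delegates to the running mon on testbed-node-2) instead of generating a new one, which is why the later "Generate cluster fsid" task is skipped. A rough equivalent of that read-back for this containerized deployment, assuming the ceph-mon-<hostname> container naming seen in the docker ps filters above (the exact invocation used inside ceph-facts may differ):

    # Ask a running, containerized monitor for the cluster fsid; container
    # name and target host taken from the delegation shown in the log above.
    docker exec ceph-mon-testbed-node-2 ceph --connect-timeout 5 fsid
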
orchestrator | 2025-06-22 20:01:15.966809 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 20:01:15.966819 | orchestrator | Sunday 22 June 2025 19:59:20 +0000 (0:00:00.319) 0:00:14.393 *********** 2025-06-22 20:01:15.966830 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966841 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966852 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966862 | orchestrator | 2025-06-22 20:01:15.966874 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 20:01:15.966885 | orchestrator | Sunday 22 June 2025 19:59:20 +0000 (0:00:00.536) 0:00:14.929 *********** 2025-06-22 20:01:15.966895 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966906 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966917 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966928 | orchestrator | 2025-06-22 20:01:15.966938 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 20:01:15.966949 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:00.310) 0:00:15.240 *********** 2025-06-22 20:01:15.966960 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.966971 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.966981 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.966992 | orchestrator | 2025-06-22 20:01:15.967003 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 20:01:15.967014 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:00.312) 0:00:15.553 *********** 2025-06-22 20:01:15.967024 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.967035 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.967046 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.967057 | orchestrator | 2025-06-22 20:01:15.967068 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 20:01:15.967109 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:00.331) 0:00:15.885 *********** 2025-06-22 20:01:15.967122 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.967133 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.967197 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.967212 | orchestrator | 2025-06-22 20:01:15.967223 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 20:01:15.967234 | orchestrator | Sunday 22 June 2025 19:59:22 +0000 (0:00:00.560) 0:00:16.446 *********** 2025-06-22 20:01:15.967350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4', 'dm-uuid-LVM-oxvSQqk8CZ0BFrSC8d4e0rP8csYAErcry6XISNtmUrICsxJFjc2IQUiMGa7kUKiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b', 'dm-uuid-LVM-OcROyVxQoe0QyuSJnJEBbfK3G7Cr6aiJ8AkjXA4FsdJp8J9PUEQNtc2h0J3H8MYK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-USmStc-YabB-na20-s4fV-wHCS-qr0s-vI18Xt', 'scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f', 'scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a', 'dm-uuid-LVM-dqIZv4Ex6RJTpbtoxv36SxSdFHaLpNcfAdi3Iehhbv218Fm5SFYyLg2ZlD4VrsKj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3DO5KQ-d07a-0vOC-ST5j-Ufhw-ysA8-DXWSNk', 'scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0', 'scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24', 'dm-uuid-LVM-oIExWIXCm0QAKVc3a25VzudhAF6eHVer2zFseVPSsqNIMqp9EdN9EH1MctfYsl6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d', 'scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967767 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.967779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RI6tee-Ctsq-b82Y-vhAs-qILk-onm9-30qwmc', 'scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e', 'scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5mbq4L-R3Q8-28jF-ju5S-NFdk-eNqv-9DpIch', 'scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985', 'scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c', 'scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2', 'dm-uuid-LVM-ruDhml2Uk5M5Hs7Cy5u1ZjJTjM3z7gZWOhMMz3cLdeNfWFfiH7KyrjJgLl3OifH3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.967927 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.967949 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb', 'dm-uuid-LVM-tgAuwQAE4RGK4uNkwQErpXJATvGxkfeGYsHnw3q9hUumzerdBk3iymo0hraEGQ0o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.967996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.968012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.968023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.968035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.968046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:15.968074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.968092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9hXsyq-bQW6-HAdc-GqEn-cEDn-KEnj-P18Wfe', 'scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b', 'scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.968105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gtPKO0-Hy2x-8HeF-yiH2-0AlN-kFRW-3l0tKg', 'scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238', 'scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.968127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6', 'scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.968172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:15.968187 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.968198 | orchestrator | 2025-06-22 20:01:15.968209 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 20:01:15.968220 | orchestrator | Sunday 22 June 2025 19:59:23 +0000 (0:00:00.565) 0:00:17.011 *********** 2025-06-22 20:01:15.968232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4', 'dm-uuid-LVM-oxvSQqk8CZ0BFrSC8d4e0rP8csYAErcry6XISNtmUrICsxJFjc2IQUiMGa7kUKiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b', 'dm-uuid-LVM-OcROyVxQoe0QyuSJnJEBbfK3G7Cr6aiJ8AkjXA4FsdJp8J9PUEQNtc2h0J3H8MYK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968261 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a', 'dm-uuid-LVM-dqIZv4Ex6RJTpbtoxv36SxSdFHaLpNcfAdi3Iehhbv218Fm5SFYyLg2ZlD4VrsKj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968402 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16', 'scsi-SQEMU_QEMU_HARDDISK_dccc5f96-71f7-47e2-8549-6be2ae231111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24', 'dm-uuid-LVM-oIExWIXCm0QAKVc3a25VzudhAF6eHVer2zFseVPSsqNIMqp9EdN9EH1MctfYsl6J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9f4df137--04dd--5f0e--acd7--f62ec38375b4-osd--block--9f4df137--04dd--5f0e--acd7--f62ec38375b4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-USmStc-YabB-na20-s4fV-wHCS-qr0s-vI18Xt', 'scsi-0QEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f', 'scsi-SQEMU_QEMU_HARDDISK_e0f44a1e-7594-4bf3-80ad-ef0ec7d7da7f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5c0aa592--9340--5775--8ceb--7aef1759a79b-osd--block--5c0aa592--9340--5775--8ceb--7aef1759a79b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3DO5KQ-d07a-0vOC-ST5j-Ufhw-ysA8-DXWSNk', 'scsi-0QEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0', 'scsi-SQEMU_QEMU_HARDDISK_f3f79088-b1f4-4694-a8b0-38e1aef3e3c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d', 'scsi-SQEMU_QEMU_HARDDISK_26c69c7b-6bcd-45bf-ac87-a3c483ce4b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968538 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.968554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2', 'dm-uuid-LVM-ruDhml2Uk5M5Hs7Cy5u1ZjJTjM3z7gZWOhMMz3cLdeNfWFfiH7KyrjJgLl3OifH3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16', 'scsi-SQEMU_QEMU_HARDDISK_afcfd86f-9b82-44c6-98eb-03971d4f7354-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b7d3102c--a914--5a7b--b709--ad20b0d5984a-osd--block--b7d3102c--a914--5a7b--b709--ad20b0d5984a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RI6tee-Ctsq-b82Y-vhAs-qILk-onm9-30qwmc', 'scsi-0QEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e', 'scsi-SQEMU_QEMU_HARDDISK_a6d93ccb-5091-4fbc-bc32-8344f81d146e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb', 'dm-uuid-LVM-tgAuwQAE4RGK4uNkwQErpXJATvGxkfeGYsHnw3q9hUumzerdBk3iymo0hraEGQ0o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0c557b89--2e3b--5795--aff3--9e4ccad52f24-osd--block--0c557b89--2e3b--5795--aff3--9e4ccad52f24'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5mbq4L-R3Q8-28jF-ju5S-NFdk-eNqv-9DpIch', 'scsi-0QEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985', 'scsi-SQEMU_QEMU_HARDDISK_4258f07d-32b4-4c40-a297-43ff401da985'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968713 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c', 'scsi-SQEMU_QEMU_HARDDISK_626bb53b-03fa-4cf2-9c74-01c88e74436c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968741 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968770 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.968781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16', 'scsi-SQEMU_QEMU_HARDDISK_e33bbb77-8230-4722-836e-e6cdd6981157-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--26b627d5--c9a2--5c9e--a2df--a450422a30c2-osd--block--26b627d5--c9a2--5c9e--a2df--a450422a30c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9hXsyq-bQW6-HAdc-GqEn-cEDn-KEnj-P18Wfe', 'scsi-0QEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b', 'scsi-SQEMU_QEMU_HARDDISK_9d9e86c4-e7e9-4d2c-9b29-911f2bd5eb8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f64325fb--298e--5c24--b96e--fd5d866c56eb-osd--block--f64325fb--298e--5c24--b96e--fd5d866c56eb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gtPKO0-Hy2x-8HeF-yiH2-0AlN-kFRW-3l0tKg', 'scsi-0QEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238', 'scsi-SQEMU_QEMU_HARDDISK_d64e86dd-c29d-4edc-bf55-6282aedab238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6', 'scsi-SQEMU_QEMU_HARDDISK_a34530e6-164e-4284-ba94-1682f51170e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:15.968956 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.968967 | orchestrator | 2025-06-22 20:01:15.968978 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 20:01:15.968989 | orchestrator | Sunday 22 June 2025 19:59:23 +0000 (0:00:00.628) 0:00:17.639 *********** 2025-06-22 20:01:15.969000 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.969011 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.969022 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.969033 | orchestrator | 2025-06-22 20:01:15.969044 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 20:01:15.969055 | orchestrator | Sunday 22 June 2025 19:59:24 +0000 (0:00:00.737) 0:00:18.376 *********** 2025-06-22 20:01:15.969066 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.969076 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.969087 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.969098 | orchestrator | 2025-06-22 20:01:15.969109 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:01:15.969120 | orchestrator | Sunday 22 June 2025 19:59:24 +0000 (0:00:00.519) 0:00:18.896 *********** 2025-06-22 20:01:15.969131 | 
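The long runs of "skipping" output above come from the ceph-facts device loop: every entry in ansible_facts.devices (loop devices, the config-drive DVD, the root disk and the Ceph data disks) is evaluated and skipped because the condition osd_auto_discovery | default(False) | bool is false. With auto discovery disabled, ceph-ansible works from an explicitly configured device list instead. A minimal host_vars sketch of that mode follows; the device paths are assumptions for illustration, not values taken from this job's inventory.

# Sketch only, assuming two data disks per OSD node; paths are illustrative.
osd_auto_discovery: false
devices:
  - /dev/sdb
  - /dev/sdc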
orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.969176 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.969196 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.969214 | orchestrator | 2025-06-22 20:01:15.969225 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:01:15.969236 | orchestrator | Sunday 22 June 2025 19:59:25 +0000 (0:00:00.675) 0:00:19.572 *********** 2025-06-22 20:01:15.969247 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969258 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969279 | orchestrator | 2025-06-22 20:01:15.969290 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:01:15.969301 | orchestrator | Sunday 22 June 2025 19:59:25 +0000 (0:00:00.291) 0:00:19.863 *********** 2025-06-22 20:01:15.969312 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969322 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969333 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969344 | orchestrator | 2025-06-22 20:01:15.969354 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:01:15.969365 | orchestrator | Sunday 22 June 2025 19:59:26 +0000 (0:00:00.428) 0:00:20.292 *********** 2025-06-22 20:01:15.969376 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969386 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969397 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969408 | orchestrator | 2025-06-22 20:01:15.969419 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 20:01:15.969430 | orchestrator | Sunday 22 June 2025 19:59:26 +0000 (0:00:00.579) 0:00:20.871 *********** 2025-06-22 20:01:15.969447 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 20:01:15.969458 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 20:01:15.969468 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 20:01:15.969479 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 20:01:15.969490 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 20:01:15.969500 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 20:01:15.969510 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 20:01:15.969521 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 20:01:15.969532 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 20:01:15.969542 | orchestrator | 2025-06-22 20:01:15.969553 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 20:01:15.969564 | orchestrator | Sunday 22 June 2025 19:59:27 +0000 (0:00:00.842) 0:00:21.713 *********** 2025-06-22 20:01:15.969574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:01:15.969585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:01:15.969595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:01:15.969606 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 20:01:15.969627 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 20:01:15.969638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 20:01:15.969649 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 20:01:15.969670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 20:01:15.969681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 20:01:15.969692 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969702 | orchestrator | 2025-06-22 20:01:15.969713 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 20:01:15.969724 | orchestrator | Sunday 22 June 2025 19:59:28 +0000 (0:00:00.358) 0:00:22.072 *********** 2025-06-22 20:01:15.969735 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:15.969753 | orchestrator | 2025-06-22 20:01:15.969764 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:01:15.969775 | orchestrator | Sunday 22 June 2025 19:59:28 +0000 (0:00:00.698) 0:00:22.771 *********** 2025-06-22 20:01:15.969786 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969796 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969807 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969818 | orchestrator | 2025-06-22 20:01:15.969835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:01:15.969847 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:00.313) 0:00:23.085 *********** 2025-06-22 20:01:15.969858 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969869 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969879 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969890 | orchestrator | 2025-06-22 20:01:15.969901 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:01:15.969912 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:00.299) 0:00:23.384 *********** 2025-06-22 20:01:15.969923 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.969933 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.969944 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:15.969955 | orchestrator | 2025-06-22 20:01:15.969965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:01:15.969976 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:00.299) 0:00:23.684 *********** 2025-06-22 20:01:15.969987 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.969998 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.970009 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.970069 | orchestrator | 2025-06-22 20:01:15.970081 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:01:15.970093 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:00.629) 0:00:24.313 *********** 2025-06-22 20:01:15.970103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:15.970114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 
20:01:15.970125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:15.970135 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.970212 | orchestrator | 2025-06-22 20:01:15.970225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:01:15.970236 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:00.375) 0:00:24.689 *********** 2025-06-22 20:01:15.970247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:15.970258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:15.970269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:15.970280 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.970291 | orchestrator | 2025-06-22 20:01:15.970301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:01:15.970312 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.356) 0:00:25.046 *********** 2025-06-22 20:01:15.970323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:15.970333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:15.970345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:15.970355 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.970366 | orchestrator | 2025-06-22 20:01:15.970377 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:01:15.970393 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.365) 0:00:25.411 *********** 2025-06-22 20:01:15.970405 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:15.970416 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:15.970435 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:15.970446 | orchestrator | 2025-06-22 20:01:15.970457 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:01:15.970468 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.314) 0:00:25.726 *********** 2025-06-22 20:01:15.970479 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:01:15.970490 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:01:15.970501 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:01:15.970512 | orchestrator | 2025-06-22 20:01:15.970523 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 20:01:15.970534 | orchestrator | Sunday 22 June 2025 19:59:32 +0000 (0:00:00.520) 0:00:26.247 *********** 2025-06-22 20:01:15.970544 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:15.970555 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:15.970566 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:15.970578 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:01:15.970589 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:01:15.970600 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:01:15.970611 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-06-22 20:01:15.970622 | orchestrator | 2025-06-22 20:01:15.970632 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 20:01:15.970643 | orchestrator | Sunday 22 June 2025 19:59:33 +0000 (0:00:00.950) 0:00:27.198 *********** 2025-06-22 20:01:15.970654 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:15.970665 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:15.970675 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:15.970686 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:01:15.970697 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:01:15.970706 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:01:15.970716 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:01:15.970725 | orchestrator | 2025-06-22 20:01:15.970742 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-22 20:01:15.970752 | orchestrator | Sunday 22 June 2025 19:59:35 +0000 (0:00:02.106) 0:00:29.305 *********** 2025-06-22 20:01:15.970761 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:15.970771 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:15.970781 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-22 20:01:15.970790 | orchestrator | 2025-06-22 20:01:15.970800 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-22 20:01:15.970810 | orchestrator | Sunday 22 June 2025 19:59:35 +0000 (0:00:00.395) 0:00:29.700 *********** 2025-06-22 20:01:15.970820 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:15.970831 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:15.970841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:15.970858 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:15.970868 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:15.970878 | orchestrator | 2025-06-22 20:01:15.970888 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-22 20:01:15.970897 | orchestrator | Sunday 22 June 2025 20:00:19 +0000 (0:00:43.715) 0:01:13.416 *********** 2025-06-22 20:01:15.970911 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970941 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970950 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.970970 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-22 20:01:15.970979 | orchestrator | 2025-06-22 20:01:15.970989 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-22 20:01:15.970998 | orchestrator | Sunday 22 June 2025 20:00:44 +0000 (0:00:24.942) 0:01:38.359 *********** 2025-06-22 20:01:15.971008 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971017 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971027 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971037 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971047 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971056 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971066 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:15.971075 | orchestrator | 2025-06-22 20:01:15.971085 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-22 20:01:15.971094 | orchestrator | Sunday 22 June 2025 20:00:56 +0000 (0:00:12.357) 0:01:50.716 *********** 2025-06-22 20:01:15.971104 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971113 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971123 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971161 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971172 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971198 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971214 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971224 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971234 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971243 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971252 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971262 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971272 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:15.971291 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:15.971300 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:15.971310 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-22 20:01:15.971319 | orchestrator | 2025-06-22 20:01:15.971329 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:01:15.971339 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-22 20:01:15.971350 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 20:01:15.971360 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 20:01:15.971370 | orchestrator | 2025-06-22 20:01:15.971379 | orchestrator | 2025-06-22 20:01:15.971389 | orchestrator | 2025-06-22 20:01:15.971399 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:01:15.971408 | orchestrator | Sunday 22 June 2025 20:01:14 +0000 (0:00:17.685) 0:02:08.401 *********** 2025-06-22 20:01:15.971417 | orchestrator | =============================================================================== 2025-06-22 20:01:15.971427 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.72s 2025-06-22 20:01:15.971436 | orchestrator | generate keys ---------------------------------------------------------- 24.94s 2025-06-22 20:01:15.971446 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.69s 2025-06-22 20:01:15.971461 | orchestrator | get keys from monitors ------------------------------------------------- 12.36s 2025-06-22 20:01:15.971471 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s 2025-06-22 20:01:15.971481 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.11s 2025-06-22 20:01:15.971490 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2025-06-22 20:01:15.971500 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2025-06-22 20:01:15.971509 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s 2025-06-22 20:01:15.971518 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-06-22 20:01:15.971528 | orchestrator | 
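In the task recap, "create openstack pool(s)" is the longest entry at roughly 43.7 seconds. The loop items printed for that task define five RBD pools with identical settings. The list below reconstructs those items in ceph-ansible style; the values mirror the logged dictionaries, while the variable name openstack_pools is the customary one and is assumed here.

# Reconstructed from the logged loop items; variable name assumed.
openstack_pools:
  - { name: backups, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: volumes, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: images, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: metrics, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: vms, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }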
ceph-facts : Check if podman binary is present -------------------------- 0.80s 2025-06-22 20:01:15.971537 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s 2025-06-22 20:01:15.971547 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-06-22 20:01:15.971556 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.69s 2025-06-22 20:01:15.971566 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2025-06-22 20:01:15.971575 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-06-22 20:01:15.971591 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s 2025-06-22 20:01:15.971601 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2025-06-22 20:01:15.971610 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2025-06-22 20:01:15.971620 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.58s 2025-06-22 20:01:15.971629 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:15.971639 | orchestrator | 2025-06-22 20:01:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:19.024046 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:19.025797 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:19.027552 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:19.027578 | orchestrator | 2025-06-22 20:01:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:22.077963 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:22.078089 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:22.080003 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:22.080345 | orchestrator | 2025-06-22 20:01:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:25.118196 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:25.119796 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:25.121775 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:25.122054 | orchestrator | 2025-06-22 20:01:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:28.190112 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:28.193539 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:28.195954 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:28.195997 | orchestrator | 2025-06-22 20:01:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
20:01:31.254838 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state STARTED 2025-06-22 20:01:31.255914 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:31.257504 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:31.257515 | orchestrator | 2025-06-22 20:01:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:34.303847 | orchestrator | 2025-06-22 20:01:34 | INFO  | Task fc13bca2-e4be-4ad8-9ed4-c2073d5b58c5 is in state SUCCESS 2025-06-22 20:01:34.305705 | orchestrator | 2025-06-22 20:01:34.305750 | orchestrator | 2025-06-22 20:01:34.305763 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:01:34.305775 | orchestrator | 2025-06-22 20:01:34.305803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:01:34.305814 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:00.236) 0:00:00.236 *********** 2025-06-22 20:01:34.305849 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.305861 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.305872 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.305883 | orchestrator | 2025-06-22 20:01:34.305894 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:01:34.305905 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:00.254) 0:00:00.491 *********** 2025-06-22 20:01:34.305916 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-22 20:01:34.305927 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-22 20:01:34.306271 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-22 20:01:34.306295 | orchestrator | 2025-06-22 20:01:34.306306 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-22 20:01:34.306317 | orchestrator | 2025-06-22 20:01:34.306328 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:01:34.306339 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.373) 0:00:00.864 *********** 2025-06-22 20:01:34.306350 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:34.306363 | orchestrator | 2025-06-22 20:01:34.306374 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-22 20:01:34.306385 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.453) 0:00:01.317 *********** 2025-06-22 20:01:34.306403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.306444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.306472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.306484 | orchestrator | 2025-06-22 20:01:34.306502 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-22 20:01:34.306514 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.921) 0:00:02.239 *********** 2025-06-22 20:01:34.306525 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.306535 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.306546 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.306557 | orchestrator | 2025-06-22 20:01:34.306568 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:01:34.306580 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:00.401) 0:00:02.640 *********** 2025-06-22 20:01:34.306598 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:01:34.306610 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:01:34.306632 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:01:34.306652 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'masakari', 'enabled': False})  2025-06-22 20:01:34.306663 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:01:34.306674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:01:34.306685 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:01:34.306695 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:01:34.306706 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:01:34.306716 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:01:34.306727 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:01:34.306737 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:01:34.306748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:01:34.306759 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:01:34.306769 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:01:34.306780 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:01:34.306790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:01:34.306801 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:01:34.306812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:01:34.306822 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:01:34.306833 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:01:34.306843 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:01:34.306854 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:01:34.306865 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:01:34.306879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-22 20:01:34.306894 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-22 20:01:34.306906 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-22 20:01:34.306918 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-22 20:01:34.306938 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-22 20:01:34.306950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-22 20:01:34.306963 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-22 20:01:34.306975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-22 20:01:34.306987 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-22 20:01:34.307000 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-22 20:01:34.307014 | orchestrator | 2025-06-22 20:01:34.307026 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.307039 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:00.705) 0:00:03.345 *********** 2025-06-22 20:01:34.307051 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.307064 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.307076 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.307089 | orchestrator | 2025-06-22 20:01:34.307100 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.307113 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:00.311) 0:00:03.657 *********** 2025-06-22 20:01:34.307131 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307168 | orchestrator | 2025-06-22 20:01:34.307180 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.307196 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:00.115) 0:00:03.773 *********** 2025-06-22 20:01:34.307207 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307218 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.307229 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.307239 | orchestrator | 2025-06-22 20:01:34.307250 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.307261 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:00.510) 0:00:04.283 *********** 2025-06-22 20:01:34.307272 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.307282 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.307293 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.307304 | orchestrator | 2025-06-22 20:01:34.307314 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.307325 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.307) 0:00:04.591 *********** 2025-06-22 20:01:34.307336 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307347 | orchestrator | 2025-06-22 20:01:34.307357 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.307368 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.133) 0:00:04.724 *********** 2025-06-22 20:01:34.307378 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307389 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.307400 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.307410 | 
orchestrator | 2025-06-22 20:01:34.307421 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.307432 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.273) 0:00:04.998 *********** 2025-06-22 20:01:34.307442 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.307453 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.307464 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.307481 | orchestrator | 2025-06-22 20:01:34.307492 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.307503 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.259) 0:00:05.257 *********** 2025-06-22 20:01:34.307513 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307524 | orchestrator | 2025-06-22 20:01:34.307624 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.307657 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.280) 0:00:05.537 *********** 2025-06-22 20:01:34.307681 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307692 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.307703 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.307714 | orchestrator | 2025-06-22 20:01:34.307725 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.307736 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.298) 0:00:05.836 *********** 2025-06-22 20:01:34.307747 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.307758 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.307769 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.307779 | orchestrator | 2025-06-22 20:01:34.307790 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.307801 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.291) 0:00:06.127 *********** 2025-06-22 20:01:34.307812 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307822 | orchestrator | 2025-06-22 20:01:34.307833 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.307844 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.112) 0:00:06.240 *********** 2025-06-22 20:01:34.307855 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307865 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.307876 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.307887 | orchestrator | 2025-06-22 20:01:34.307898 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.307909 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.253) 0:00:06.494 *********** 2025-06-22 20:01:34.307920 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.307931 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.307941 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.307952 | orchestrator | 2025-06-22 20:01:34.307963 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.307974 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.426) 0:00:06.920 *********** 2025-06-22 20:01:34.307985 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.307996 | orchestrator | 2025-06-22 20:01:34.308007 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.308017 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.117) 0:00:07.038 *********** 2025-06-22 20:01:34.308028 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308039 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.308049 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.308060 | orchestrator | 2025-06-22 20:01:34.308072 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.308090 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.277) 0:00:07.315 *********** 2025-06-22 20:01:34.308110 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.308127 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.308188 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.308208 | orchestrator | 2025-06-22 20:01:34.308227 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.308244 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.287) 0:00:07.603 *********** 2025-06-22 20:01:34.308255 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308266 | orchestrator | 2025-06-22 20:01:34.308277 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.308288 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.101) 0:00:07.705 *********** 2025-06-22 20:01:34.308308 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308319 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.308330 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.308341 | orchestrator | 2025-06-22 20:01:34.308351 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.308372 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.380) 0:00:08.085 *********** 2025-06-22 20:01:34.308383 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.308394 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.308411 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.308422 | orchestrator | 2025-06-22 20:01:34.308433 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.308444 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.292) 0:00:08.377 *********** 2025-06-22 20:01:34.308455 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308466 | orchestrator | 2025-06-22 20:01:34.308477 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.308488 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.113) 0:00:08.491 *********** 2025-06-22 20:01:34.308499 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308509 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.308520 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.308531 | orchestrator | 2025-06-22 20:01:34.308541 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.308552 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:00.256) 0:00:08.748 *********** 2025-06-22 20:01:34.308563 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.308573 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.308584 | 
orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.308595 | orchestrator | 2025-06-22 20:01:34.308606 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.308616 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:00.289) 0:00:09.038 *********** 2025-06-22 20:01:34.308627 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308638 | orchestrator | 2025-06-22 20:01:34.308648 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.308659 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:00.126) 0:00:09.165 *********** 2025-06-22 20:01:34.308670 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308681 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.308691 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.308702 | orchestrator | 2025-06-22 20:01:34.308713 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.308723 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.563) 0:00:09.728 *********** 2025-06-22 20:01:34.308734 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.308745 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.308755 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.308766 | orchestrator | 2025-06-22 20:01:34.308777 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.308788 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.331) 0:00:10.059 *********** 2025-06-22 20:01:34.308799 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308809 | orchestrator | 2025-06-22 20:01:34.308820 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.308831 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.128) 0:00:10.187 *********** 2025-06-22 20:01:34.308842 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.308867 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.308879 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.308889 | orchestrator | 2025-06-22 20:01:34.308900 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:01:34.308911 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.307) 0:00:10.495 *********** 2025-06-22 20:01:34.308939 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:34.308950 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:34.308961 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:34.308972 | orchestrator | 2025-06-22 20:01:34.308983 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:01:34.308994 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:00.572) 0:00:11.067 *********** 2025-06-22 20:01:34.309004 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.309015 | orchestrator | 2025-06-22 20:01:34.309026 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:01:34.309037 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:00.137) 0:00:11.205 *********** 2025-06-22 20:01:34.309048 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.309059 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.309069 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:01:34.309080 | orchestrator | 2025-06-22 20:01:34.309091 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-22 20:01:34.309102 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:00.307) 0:00:11.512 *********** 2025-06-22 20:01:34.309112 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:34.309123 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:34.309211 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:34.309227 | orchestrator | 2025-06-22 20:01:34.309238 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-22 20:01:34.309249 | orchestrator | Sunday 22 June 2025 20:00:06 +0000 (0:00:01.646) 0:00:13.159 *********** 2025-06-22 20:01:34.309260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:01:34.309271 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:01:34.309282 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:01:34.309292 | orchestrator | 2025-06-22 20:01:34.309303 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-22 20:01:34.309314 | orchestrator | Sunday 22 June 2025 20:00:08 +0000 (0:00:01.671) 0:00:14.830 *********** 2025-06-22 20:01:34.309325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:01:34.309336 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:01:34.309347 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:01:34.309358 | orchestrator | 2025-06-22 20:01:34.309376 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-22 20:01:34.309388 | orchestrator | Sunday 22 June 2025 20:00:10 +0000 (0:00:01.843) 0:00:16.673 *********** 2025-06-22 20:01:34.309404 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:01:34.309416 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:01:34.309427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:01:34.309438 | orchestrator | 2025-06-22 20:01:34.309455 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-22 20:01:34.309470 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:01.554) 0:00:18.228 *********** 2025-06-22 20:01:34.309481 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.309492 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.309503 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.309513 | orchestrator | 2025-06-22 20:01:34.309524 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-22 20:01:34.309535 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:00.288) 0:00:18.516 *********** 2025-06-22 20:01:34.309546 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.309568 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.309579 
| orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.309590 | orchestrator | 2025-06-22 20:01:34.309721 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:01:34.309741 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:00.276) 0:00:18.793 *********** 2025-06-22 20:01:34.309757 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:34.309767 | orchestrator | 2025-06-22 20:01:34.309777 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-22 20:01:34.309786 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:00.738) 0:00:19.532 *********** 2025-06-22 20:01:34.309851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.309882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.309902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.309914 | orchestrator | 2025-06-22 20:01:34.309924 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-22 20:01:34.309934 | orchestrator | Sunday 22 June 2025 20:00:14 +0000 (0:00:01.349) 0:00:20.882 *********** 2025-06-22 20:01:34.309957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.309975 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.309997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.310009 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.310049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.310069 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.310078 | orchestrator | 2025-06-22 20:01:34.310088 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-22 20:01:34.310098 | orchestrator | Sunday 22 June 2025 20:00:14 +0000 (0:00:00.648) 0:00:21.530 *********** 2025-06-22 20:01:34.310122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.310163 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.310183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.310201 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.310237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:01:34.310257 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.310267 | orchestrator | 2025-06-22 20:01:34.310277 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-22 20:01:34.310287 | orchestrator | Sunday 22 June 2025 20:00:16 +0000 (0:00:01.169) 0:00:22.700 *********** 2025-06-22 20:01:34.310297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.310322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.310340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:01:34.310351 | orchestrator | 2025-06-22 20:01:34.310361 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:01:34.310370 | orchestrator | Sunday 22 June 2025 20:00:17 +0000 (0:00:01.490) 0:00:24.190 *********** 2025-06-22 20:01:34.310380 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:34.310398 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:34.310408 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:34.310417 | orchestrator | 2025-06-22 20:01:34.310427 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:01:34.310442 | orchestrator | Sunday 22 June 2025 20:00:17 +0000 (0:00:00.294) 0:00:24.485 *********** 2025-06-22 20:01:34.310453 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:34.310463 | orchestrator | 2025-06-22 20:01:34.310478 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-22 20:01:34.310489 | orchestrator | Sunday 22 June 2025 20:00:18 +0000 (0:00:00.778) 0:00:25.264 *********** 2025-06-22 20:01:34.310500 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:34.310511 | orchestrator | 2025-06-22 20:01:34.310522 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-22 20:01:34.310533 | orchestrator | Sunday 22 June 2025 20:00:20 +0000 (0:00:02.132) 0:00:27.396 *********** 2025-06-22 20:01:34.310545 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:34.310556 | orchestrator | 2025-06-22 20:01:34.310567 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-22 20:01:34.310578 | orchestrator | Sunday 22 June 2025 20:00:22 +0000 (0:00:01.939) 0:00:29.336 *********** 2025-06-22 20:01:34.310589 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:34.310600 | orchestrator | 2025-06-22 20:01:34.310611 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-22 20:01:34.310623 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:15.778) 0:00:45.114 *********** 2025-06-22 20:01:34.310634 | orchestrator | 2025-06-22 20:01:34.310645 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-22 20:01:34.310657 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:00.065) 0:00:45.179 *********** 2025-06-22 20:01:34.310667 | orchestrator | 2025-06-22 20:01:34.310679 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-22 20:01:34.310689 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:00.064) 0:00:45.244 *********** 2025-06-22 20:01:34.310700 | orchestrator | 2025-06-22 20:01:34.310712 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-22 20:01:34.310723 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:00.065) 0:00:45.310 *********** 2025-06-22 20:01:34.310734 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:34.310745 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:34.310755 | orchestrator | changed: [testbed-node-1] 
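The horizon item dumped repeatedly in the tasks above is a kolla-style service definition: it carries the container image, the ENABLE_* feature toggles, a healthcheck based on healthcheck_curl, and an 'haproxy' map describing four listeners (internal and external HTTP, port 443 on the frontend with plain-HTTP backends on 80 per tls_backend: 'no', plus port-80 redirects that route /.well-known/acme-challenge/ requests to acme_client_back). The Python sketch below, with the dict trimmed to the fields shown in the log, walks such a definition and summarizes the listeners and healthcheck it describes; the summarize helper is illustrative only and is not kolla-ansible code, and the frontend/backend reading of 'port' vs. 'listen_port' is an interpretation of the logged values.

# Hypothetical helper: summarize the HAProxy listeners and healthcheck of a
# kolla-style service definition like the 'horizon' item logged above.
# The dict is trimmed to the fields actually shown in the log.

horizon = {
    "container_name": "horizon",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
    "haproxy": {
        "horizon": {"enabled": True, "mode": "http", "external": False,
                    "port": "443", "listen_port": "80"},
        "horizon_redirect": {"enabled": True, "mode": "redirect",
                             "external": False, "port": "80", "listen_port": "80"},
        "horizon_external": {"enabled": True, "mode": "http", "external": True,
                             "external_fqdn": "api.testbed.osism.xyz",
                             "port": "443", "listen_port": "80"},
        "horizon_external_redirect": {"enabled": True, "mode": "redirect",
                                      "external": True, "port": "80",
                                      "listen_port": "80"},
    },
}

def summarize(service: dict) -> None:
    # One line per enabled HAProxy listener: scope, mode and port mapping.
    for name, listener in service.get("haproxy", {}).items():
        if not listener.get("enabled"):
            continue
        scope = "external" if listener.get("external") else "internal"
        print(f"{name}: {scope} {listener['mode']} "
              f"frontend :{listener['port']} -> backend :{listener['listen_port']}")
    hc = service.get("healthcheck", {})
    if hc:
        print(f"healthcheck: {hc['test'][-1]} every {hc['interval']}s, "
              f"{hc['retries']} retries, {hc['timeout']}s timeout")

summarize(horizon)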
2025-06-22 20:01:34.310766 | orchestrator | 2025-06-22 20:01:34.310777 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:01:34.310790 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-22 20:01:34.310801 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-22 20:01:34.310813 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-22 20:01:34.310824 | orchestrator | 2025-06-22 20:01:34.310835 | orchestrator | 2025-06-22 20:01:34.310846 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:01:34.310855 | orchestrator | Sunday 22 June 2025 20:01:33 +0000 (0:00:54.664) 0:01:39.974 *********** 2025-06-22 20:01:34.310865 | orchestrator | =============================================================================== 2025-06-22 20:01:34.310874 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.66s 2025-06-22 20:01:34.310883 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.78s 2025-06-22 20:01:34.310893 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.13s 2025-06-22 20:01:34.310912 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.94s 2025-06-22 20:01:34.310922 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.84s 2025-06-22 20:01:34.310931 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.67s 2025-06-22 20:01:34.310941 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s 2025-06-22 20:01:34.310950 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.55s 2025-06-22 20:01:34.310960 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.49s 2025-06-22 20:01:34.310970 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.35s 2025-06-22 20:01:34.310979 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.17s 2025-06-22 20:01:34.310989 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.92s 2025-06-22 20:01:34.310998 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2025-06-22 20:01:34.311008 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-06-22 20:01:34.311018 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-06-22 20:01:34.311027 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-06-22 20:01:34.311037 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-06-22 20:01:34.311046 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-06-22 20:01:34.311056 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-06-22 20:01:34.311065 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.45s 2025-06-22 20:01:34.311075 | orchestrator 
| 2025-06-22 20:01:34 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:34.311089 | orchestrator | 2025-06-22 20:01:34 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:34.311103 | orchestrator | 2025-06-22 20:01:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:37.364824 | orchestrator | 2025-06-22 20:01:37 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:37.365007 | orchestrator | 2025-06-22 20:01:37 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:37.365028 | orchestrator | 2025-06-22 20:01:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:40.417497 | orchestrator | 2025-06-22 20:01:40 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:40.418402 | orchestrator | 2025-06-22 20:01:40 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:40.418427 | orchestrator | 2025-06-22 20:01:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:43.479535 | orchestrator | 2025-06-22 20:01:43 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state STARTED 2025-06-22 20:01:43.481481 | orchestrator | 2025-06-22 20:01:43 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:43.481523 | orchestrator | 2025-06-22 20:01:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:46.533958 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:01:46.535094 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task 87dad40c-e20e-4f1e-b6b0-39bfcae43bb8 is in state SUCCESS 2025-06-22 20:01:46.537914 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:46.538406 | orchestrator | 2025-06-22 20:01:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:49.603069 | orchestrator | 2025-06-22 20:01:49 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:01:49.605555 | orchestrator | 2025-06-22 20:01:49 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:49.605712 | orchestrator | 2025-06-22 20:01:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:52.655693 | orchestrator | 2025-06-22 20:01:52 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:01:52.658339 | orchestrator | 2025-06-22 20:01:52 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:52.658376 | orchestrator | 2025-06-22 20:01:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:55.714199 | orchestrator | 2025-06-22 20:01:55 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:01:55.715299 | orchestrator | 2025-06-22 20:01:55 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:55.715312 | orchestrator | 2025-06-22 20:01:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:58.772928 | orchestrator | 2025-06-22 20:01:58 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:01:58.776584 | orchestrator | 2025-06-22 20:01:58 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:01:58.776776 | orchestrator | 2025-06-22 20:01:58 | INFO  | Wait 1 second(s) until the next check 
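The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are the deployment driver polling task results until each reaches a terminal state such as SUCCESS. A minimal Python sketch of that polling pattern follows; the get_state callable is a hypothetical stand-in for whatever API the real tool queries, and the one-second interval matches the messages in the log.

import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll each task until none is left in STARTED, mirroring the
    'Wait 1 second(s) until the next check' loop in the log above."""
    states: dict[str, str] = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)          # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            states[task_id] = state
            if state != "STARTED":
                pending.discard(task_id)        # terminal state: stop polling it
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states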
2025-06-22 20:02:01.822345 | orchestrator | 2025-06-22 20:02:01 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:01.823837 | orchestrator | 2025-06-22 20:02:01 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:01.824033 | orchestrator | 2025-06-22 20:02:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:04.869696 | orchestrator | 2025-06-22 20:02:04 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:04.872325 | orchestrator | 2025-06-22 20:02:04 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:04.872364 | orchestrator | 2025-06-22 20:02:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:07.925656 | orchestrator | 2025-06-22 20:02:07 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:07.927501 | orchestrator | 2025-06-22 20:02:07 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:07.927532 | orchestrator | 2025-06-22 20:02:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:10.973493 | orchestrator | 2025-06-22 20:02:10 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:10.975166 | orchestrator | 2025-06-22 20:02:10 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:10.975225 | orchestrator | 2025-06-22 20:02:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:14.025603 | orchestrator | 2025-06-22 20:02:14 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:14.027352 | orchestrator | 2025-06-22 20:02:14 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:14.027376 | orchestrator | 2025-06-22 20:02:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:17.072736 | orchestrator | 2025-06-22 20:02:17 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:17.074982 | orchestrator | 2025-06-22 20:02:17 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:17.075158 | orchestrator | 2025-06-22 20:02:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:20.125303 | orchestrator | 2025-06-22 20:02:20 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:20.125924 | orchestrator | 2025-06-22 20:02:20 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:20.126065 | orchestrator | 2025-06-22 20:02:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:23.171697 | orchestrator | 2025-06-22 20:02:23 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:23.173756 | orchestrator | 2025-06-22 20:02:23 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:23.174197 | orchestrator | 2025-06-22 20:02:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:26.221407 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:26.223273 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state STARTED 2025-06-22 20:02:26.223342 | orchestrator | 2025-06-22 20:02:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:29.261474 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 
c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:29.262962 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:29.263994 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:29.264026 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:29.264551 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:29.266298 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 0487368c-ceeb-4311-856f-12dff5319f50 is in state SUCCESS 2025-06-22 20:02:29.267727 | orchestrator | 2025-06-22 20:02:29.267822 | orchestrator | 2025-06-22 20:02:29.267840 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-22 20:02:29.267853 | orchestrator | 2025-06-22 20:02:29.267864 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-22 20:02:29.267876 | orchestrator | Sunday 22 June 2025 20:01:19 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-06-22 20:02:29.267939 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-22 20:02:29.268824 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.268844 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.268855 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:02:29.268866 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.268877 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-22 20:02:29.268888 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-22 20:02:29.268899 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:02:29.268910 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-22 20:02:29.268921 | orchestrator | 2025-06-22 20:02:29.268933 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-22 20:02:29.268967 | orchestrator | Sunday 22 June 2025 20:01:23 +0000 (0:00:04.106) 0:00:04.272 *********** 2025-06-22 20:02:29.268979 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 20:02:29.268991 | orchestrator | 2025-06-22 20:02:29.269002 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-22 20:02:29.269013 | orchestrator | Sunday 22 June 2025 20:01:24 +0000 (0:00:01.017) 0:00:05.290 *********** 2025-06-22 20:02:29.269029 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-22 20:02:29.269041 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269051 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269062 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:02:29.269075 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269086 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-22 20:02:29.269097 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-22 20:02:29.269108 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:02:29.269119 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-22 20:02:29.269152 | orchestrator | 2025-06-22 20:02:29.269165 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-22 20:02:29.269176 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:13.269) 0:00:18.560 *********** 2025-06-22 20:02:29.269187 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-22 20:02:29.269198 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269209 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269220 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:02:29.269230 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:02:29.269241 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-22 20:02:29.269252 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-22 20:02:29.269263 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:02:29.269273 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-22 20:02:29.269284 | orchestrator | 2025-06-22 20:02:29.269295 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:29.269306 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:02:29.269318 | orchestrator | 2025-06-22 20:02:29.269329 | orchestrator | 2025-06-22 20:02:29.269339 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:29.269350 | orchestrator | Sunday 22 June 2025 20:01:44 +0000 (0:00:06.815) 0:00:25.375 *********** 2025-06-22 20:02:29.269361 | orchestrator | =============================================================================== 2025-06-22 20:02:29.269372 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.27s 2025-06-22 20:02:29.269383 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.82s 2025-06-22 20:02:29.269394 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.11s 2025-06-22 20:02:29.269405 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2025-06-22 20:02:29.269416 | orchestrator | 2025-06-22 20:02:29.269429 | orchestrator | 2025-06-22 20:02:29.269441 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:02:29.269453 | orchestrator | 2025-06-22 20:02:29.269506 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-06-22 20:02:29.269529 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:00.291) 0:00:00.291 *********** 2025-06-22 20:02:29.269542 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.269554 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.269567 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.269579 | orchestrator | 2025-06-22 20:02:29.269592 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:02:29.269604 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.254) 0:00:00.546 *********** 2025-06-22 20:02:29.269617 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-22 20:02:29.269629 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-22 20:02:29.269642 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-22 20:02:29.269654 | orchestrator | 2025-06-22 20:02:29.269667 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-22 20:02:29.269679 | orchestrator | 2025-06-22 20:02:29.269691 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.269704 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.388) 0:00:00.934 *********** 2025-06-22 20:02:29.269716 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:29.269729 | orchestrator | 2025-06-22 20:02:29.269741 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-22 20:02:29.269754 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.511) 0:00:01.446 *********** 2025-06-22 20:02:29.269777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.269795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.269835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.269856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.269939 | orchestrator | 2025-06-22 20:02:29.269950 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-22 20:02:29.269962 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:01.561) 0:00:03.007 *********** 2025-06-22 20:02:29.269978 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-22 20:02:29.269990 | orchestrator | 2025-06-22 20:02:29.270001 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-22 20:02:29.270012 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:00.939) 0:00:03.946 *********** 2025-06-22 20:02:29.270084 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.270097 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.270108 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.270119 | orchestrator | 2025-06-22 20:02:29.270165 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-22 20:02:29.270188 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.499) 0:00:04.446 *********** 2025-06-22 20:02:29.270206 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:02:29.270220 | orchestrator | 2025-06-22 20:02:29.270231 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.270242 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.648) 0:00:05.094 *********** 2025-06-22 20:02:29.270253 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-22 20:02:29.270264 | orchestrator | 2025-06-22 20:02:29.270275 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-22 20:02:29.270286 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.464) 0:00:05.558 *********** 2025-06-22 20:02:29.270304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270362 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270444 | orchestrator | 2025-06-22 20:02:29.270456 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-22 20:02:29.270467 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:02.941) 0:00:08.500 *********** 2025-06-22 20:02:29.270488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270530 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.270542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270590 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.270602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270652 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.270663 | orchestrator | 2025-06-22 20:02:29.270674 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-22 20:02:29.270685 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:00.493) 0:00:08.994 *********** 2025-06-22 20:02:29.270697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270741 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.270757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:02:29.270806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 2025-06-22 20:02:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:29.270819 | orchestrator | '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.270843 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.270867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:02:29.270885 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.270896 | orchestrator | 2025-06-22 20:02:29.270907 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-22 20:02:29.270918 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.736) 0:00:09.730 *********** 2025-06-22 20:02:29.270929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.270975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.270997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271063 | orchestrator | 2025-06-22 20:02:29.271074 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-22 20:02:29.271086 | orchestrator | Sunday 22 June 2025 20:00:06 +0000 (0:00:03.523) 0:00:13.254 *********** 2025-06-22 20:02:29.271102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271292 | orchestrator | 2025-06-22 20:02:29.271304 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-22 20:02:29.271315 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:04.579) 0:00:17.833 *********** 2025-06-22 20:02:29.271326 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.271337 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:29.271348 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:29.271359 | orchestrator | 2025-06-22 20:02:29.271370 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-22 20:02:29.271381 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:01.191) 0:00:19.025 *********** 2025-06-22 20:02:29.271392 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.271403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.271414 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.271425 | orchestrator | 2025-06-22 20:02:29.271442 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-22 
20:02:29.271454 | orchestrator | Sunday 22 June 2025 20:00:13 +0000 (0:00:00.474) 0:00:19.499 *********** 2025-06-22 20:02:29.271465 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.271476 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.271487 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.271498 | orchestrator | 2025-06-22 20:02:29.271509 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-22 20:02:29.271525 | orchestrator | Sunday 22 June 2025 20:00:13 +0000 (0:00:00.515) 0:00:20.015 *********** 2025-06-22 20:02:29.271537 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.271548 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.271559 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.271570 | orchestrator | 2025-06-22 20:02:29.271581 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-22 20:02:29.271592 | orchestrator | Sunday 22 June 2025 20:00:13 +0000 (0:00:00.317) 0:00:20.332 *********** 2025-06-22 20:02:29.271604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.271775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:02:29.271790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.271821 | orchestrator | 2025-06-22 20:02:29.271831 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.271841 | orchestrator | Sunday 22 June 2025 20:00:16 +0000 (0:00:02.165) 0:00:22.498 *********** 2025-06-22 20:02:29.271850 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.271860 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.271870 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.271885 | orchestrator | 2025-06-22 20:02:29.271895 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-22 20:02:29.271904 | orchestrator | Sunday 22 June 2025 20:00:16 +0000 (0:00:00.303) 0:00:22.802 *********** 2025-06-22 20:02:29.271914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:02:29.271924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:02:29.271938 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:02:29.271949 | orchestrator | 2025-06-22 20:02:29.271958 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-22 20:02:29.271968 | orchestrator | Sunday 22 June 2025 20:00:18 +0000 (0:00:02.206) 0:00:25.009 *********** 2025-06-22 20:02:29.271977 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:02:29.271987 | orchestrator | 2025-06-22 20:02:29.271997 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-22 20:02:29.272006 | orchestrator | Sunday 22 June 2025 20:00:19 +0000 (0:00:00.969) 0:00:25.979 *********** 2025-06-22 20:02:29.272016 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.272025 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.272035 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.272044 | orchestrator | 2025-06-22 20:02:29.272053 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-22 20:02:29.272063 | orchestrator | Sunday 22 June 2025 20:00:20 +0000 (0:00:00.541) 0:00:26.520 *********** 2025-06-22 20:02:29.272073 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2025-06-22 20:02:29.272082 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 20:02:29.272092 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:02:29.272101 | orchestrator | 2025-06-22 20:02:29.272111 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-22 20:02:29.272120 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:00.984) 0:00:27.505 *********** 2025-06-22 20:02:29.272158 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.272170 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.272179 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.272189 | orchestrator | 2025-06-22 20:02:29.272198 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-22 20:02:29.272208 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:00.304) 0:00:27.809 *********** 2025-06-22 20:02:29.272218 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:02:29.272227 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:02:29.272237 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:02:29.272251 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:02:29.272261 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:02:29.272271 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:02:29.272280 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:02:29.272290 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:02:29.272300 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:02:29.272309 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:02:29.272319 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:02:29.272328 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:02:29.272344 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:02:29.272354 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:02:29.272363 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:02:29.272373 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:02:29.272383 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:02:29.272392 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:02:29.272402 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:02:29.272412 | orchestrator | 
changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:02:29.272421 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:02:29.272431 | orchestrator | 2025-06-22 20:02:29.272441 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-22 20:02:29.272450 | orchestrator | Sunday 22 June 2025 20:00:30 +0000 (0:00:08.811) 0:00:36.621 *********** 2025-06-22 20:02:29.272460 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:02:29.272469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:02:29.272479 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:02:29.272489 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:02:29.272499 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:02:29.272514 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:02:29.272524 | orchestrator | 2025-06-22 20:02:29.272534 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-22 20:02:29.272544 | orchestrator | Sunday 22 June 2025 20:00:32 +0000 (0:00:02.696) 0:00:39.317 *********** 2025-06-22 20:02:29.272554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.272570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.272586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:02:29.272597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:02:29.272674 | orchestrator | 2025-06-22 20:02:29.272684 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.272693 | orchestrator | Sunday 22 June 2025 20:00:35 +0000 (0:00:02.314) 0:00:41.632 *********** 2025-06-22 20:02:29.272703 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.272713 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.272723 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.272732 | orchestrator | 2025-06-22 20:02:29.272742 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-22 20:02:29.272752 | orchestrator | Sunday 22 June 2025 20:00:35 +0000 (0:00:00.300) 0:00:41.932 *********** 2025-06-22 20:02:29.272761 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.272771 | orchestrator | 2025-06-22 20:02:29.272781 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-22 20:02:29.272790 | orchestrator | Sunday 22 June 2025 20:00:37 +0000 (0:00:02.301) 0:00:44.234 *********** 2025-06-22 20:02:29.272800 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.272810 | orchestrator | 2025-06-22 20:02:29.272819 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-22 20:02:29.272829 | orchestrator | Sunday 22 June 2025 20:00:40 +0000 (0:00:02.943) 0:00:47.177 *********** 2025-06-22 20:02:29.272839 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.272849 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.272859 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.272868 | orchestrator | 2025-06-22 20:02:29.272878 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-22 20:02:29.272888 | orchestrator | Sunday 22 June 2025 20:00:41 +0000 
(0:00:00.993) 0:00:48.170 *********** 2025-06-22 20:02:29.272898 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.272907 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.272922 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.272932 | orchestrator | 2025-06-22 20:02:29.272941 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-22 20:02:29.272951 | orchestrator | Sunday 22 June 2025 20:00:42 +0000 (0:00:00.345) 0:00:48.516 *********** 2025-06-22 20:02:29.272961 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.272970 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.272980 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.272989 | orchestrator | 2025-06-22 20:02:29.272999 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-22 20:02:29.273009 | orchestrator | Sunday 22 June 2025 20:00:42 +0000 (0:00:00.360) 0:00:48.877 *********** 2025-06-22 20:02:29.273019 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273033 | orchestrator | 2025-06-22 20:02:29.273043 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-22 20:02:29.273053 | orchestrator | Sunday 22 June 2025 20:00:56 +0000 (0:00:13.854) 0:01:02.731 *********** 2025-06-22 20:02:29.273062 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273072 | orchestrator | 2025-06-22 20:02:29.273081 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:02:29.273091 | orchestrator | Sunday 22 June 2025 20:01:06 +0000 (0:00:10.034) 0:01:12.766 *********** 2025-06-22 20:02:29.273100 | orchestrator | 2025-06-22 20:02:29.273110 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:02:29.273120 | orchestrator | Sunday 22 June 2025 20:01:06 +0000 (0:00:00.270) 0:01:13.036 *********** 2025-06-22 20:02:29.273149 | orchestrator | 2025-06-22 20:02:29.273161 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:02:29.273177 | orchestrator | Sunday 22 June 2025 20:01:06 +0000 (0:00:00.067) 0:01:13.104 *********** 2025-06-22 20:02:29.273194 | orchestrator | 2025-06-22 20:02:29.273210 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-22 20:02:29.273227 | orchestrator | Sunday 22 June 2025 20:01:06 +0000 (0:00:00.069) 0:01:13.173 *********** 2025-06-22 20:02:29.273237 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273246 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:29.273256 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:29.273265 | orchestrator | 2025-06-22 20:02:29.273275 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-22 20:02:29.273289 | orchestrator | Sunday 22 June 2025 20:01:21 +0000 (0:00:15.219) 0:01:28.392 *********** 2025-06-22 20:02:29.273299 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273309 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:29.273318 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:29.273327 | orchestrator | 2025-06-22 20:02:29.273337 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-22 20:02:29.273346 | orchestrator | Sunday 22 June 2025 20:01:31 +0000 
(0:00:10.005) 0:01:38.398 *********** 2025-06-22 20:02:29.273356 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273365 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:29.273375 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:29.273384 | orchestrator | 2025-06-22 20:02:29.273394 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.273403 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:06.074) 0:01:44.473 *********** 2025-06-22 20:02:29.273413 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:29.273423 | orchestrator | 2025-06-22 20:02:29.273432 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-22 20:02:29.273442 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:00.774) 0:01:45.247 *********** 2025-06-22 20:02:29.273451 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:29.273461 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:29.273470 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.273480 | orchestrator | 2025-06-22 20:02:29.273489 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-22 20:02:29.273499 | orchestrator | Sunday 22 June 2025 20:01:39 +0000 (0:00:00.708) 0:01:45.955 *********** 2025-06-22 20:02:29.273508 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:29.273518 | orchestrator | 2025-06-22 20:02:29.273527 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-22 20:02:29.273537 | orchestrator | Sunday 22 June 2025 20:01:41 +0000 (0:00:01.763) 0:01:47.718 *********** 2025-06-22 20:02:29.273546 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-22 20:02:29.273555 | orchestrator | 2025-06-22 20:02:29.273565 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-22 20:02:29.273575 | orchestrator | Sunday 22 June 2025 20:01:51 +0000 (0:00:10.084) 0:01:57.803 *********** 2025-06-22 20:02:29.273590 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-22 20:02:29.273600 | orchestrator | 2025-06-22 20:02:29.273609 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-22 20:02:29.273619 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:19.958) 0:02:17.762 *********** 2025-06-22 20:02:29.273628 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-22 20:02:29.273638 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-22 20:02:29.273647 | orchestrator | 2025-06-22 20:02:29.273657 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-22 20:02:29.273666 | orchestrator | Sunday 22 June 2025 20:02:22 +0000 (0:00:11.666) 0:02:29.428 *********** 2025-06-22 20:02:29.273676 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.273685 | orchestrator | 2025-06-22 20:02:29.273695 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-22 20:02:29.273704 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:00.342) 0:02:29.771 *********** 2025-06-22 20:02:29.273714 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:02:29.273723 | orchestrator | 2025-06-22 20:02:29.273733 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-22 20:02:29.273748 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:00.142) 0:02:29.913 *********** 2025-06-22 20:02:29.273758 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.273768 | orchestrator | 2025-06-22 20:02:29.273778 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-22 20:02:29.273787 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:00.135) 0:02:30.048 *********** 2025-06-22 20:02:29.273797 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.273806 | orchestrator | 2025-06-22 20:02:29.273816 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-22 20:02:29.273826 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:00.309) 0:02:30.358 *********** 2025-06-22 20:02:29.273836 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:29.273845 | orchestrator | 2025-06-22 20:02:29.273855 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:02:29.273865 | orchestrator | Sunday 22 June 2025 20:02:26 +0000 (0:00:02.929) 0:02:33.288 *********** 2025-06-22 20:02:29.273874 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:29.273884 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:29.273893 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:29.273903 | orchestrator | 2025-06-22 20:02:29.273912 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:29.273923 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-22 20:02:29.273933 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 20:02:29.273943 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 20:02:29.273952 | orchestrator | 2025-06-22 20:02:29.273962 | orchestrator | 2025-06-22 20:02:29.273972 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:29.273981 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.484) 0:02:33.772 *********** 2025-06-22 20:02:29.273991 | orchestrator | =============================================================================== 2025-06-22 20:02:29.274005 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.96s 2025-06-22 20:02:29.274058 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.22s 2025-06-22 20:02:29.274071 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.85s 2025-06-22 20:02:29.274090 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 11.67s 2025-06-22 20:02:29.274099 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.08s 2025-06-22 20:02:29.274109 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.03s 2025-06-22 20:02:29.274119 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.01s 2025-06-22 20:02:29.274249 | orchestrator | keystone : Copying files for 
keystone-fernet ---------------------------- 8.81s 2025-06-22 20:02:29.274280 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.07s 2025-06-22 20:02:29.274290 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.58s 2025-06-22 20:02:29.274300 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.52s 2025-06-22 20:02:29.274310 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.94s 2025-06-22 20:02:29.274319 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.94s 2025-06-22 20:02:29.274329 | orchestrator | keystone : Creating default user role ----------------------------------- 2.93s 2025-06-22 20:02:29.274338 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.70s 2025-06-22 20:02:29.274348 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.31s 2025-06-22 20:02:29.274357 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.30s 2025-06-22 20:02:29.274367 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.21s 2025-06-22 20:02:29.274376 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.17s 2025-06-22 20:02:29.274386 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s 2025-06-22 20:02:32.306510 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:32.307220 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:32.307815 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:32.308510 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:32.309355 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:32.309377 | orchestrator | 2025-06-22 20:02:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:35.350667 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:35.352280 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:35.352850 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:35.354125 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:35.355413 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:35.355463 | orchestrator | 2025-06-22 20:02:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:38.400650 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:38.402841 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:38.405872 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 
20:02:38.409579 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:38.411540 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:38.411735 | orchestrator | 2025-06-22 20:02:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:41.447214 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:41.448487 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:41.449372 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:41.449684 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:41.451666 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:41.451705 | orchestrator | 2025-06-22 20:02:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:44.500579 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:44.503913 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state STARTED 2025-06-22 20:02:44.505499 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:44.507419 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:44.509532 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:44.509604 | orchestrator | 2025-06-22 20:02:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:47.570092 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:47.574053 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task b47a854a-7c46-49d7-971a-55e5f4ebe5a7 is in state SUCCESS 2025-06-22 20:02:47.577381 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:47.578842 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:02:47.580739 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:47.582614 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:47.582638 | orchestrator | 2025-06-22 20:02:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:50.641688 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:50.643232 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:50.645022 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:02:50.646524 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:50.648415 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in 
state STARTED 2025-06-22 20:02:50.648444 | orchestrator | 2025-06-22 20:02:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:53.695106 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:53.695869 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:53.697265 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:02:53.698008 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:53.699296 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:53.699319 | orchestrator | 2025-06-22 20:02:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:56.742291 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:56.744881 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:56.746157 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:02:56.747887 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:56.749871 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:56.749905 | orchestrator | 2025-06-22 20:02:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:59.806488 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:02:59.808363 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:02:59.810965 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:02:59.812929 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:02:59.814766 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:02:59.814808 | orchestrator | 2025-06-22 20:02:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:02.852875 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:02.853861 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:02.854532 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:02.856642 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:02.857566 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:02.857590 | orchestrator | 2025-06-22 20:03:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:05.909616 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:05.910826 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in 
state STARTED 2025-06-22 20:03:05.910879 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:05.912479 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:05.913141 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:05.915456 | orchestrator | 2025-06-22 20:03:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:08.952107 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:08.955014 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:08.956565 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:08.959638 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:08.959690 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:08.959702 | orchestrator | 2025-06-22 20:03:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:11.980585 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:11.981481 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:11.982802 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:11.985051 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:11.985647 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:11.985672 | orchestrator | 2025-06-22 20:03:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:15.014361 | orchestrator | 2025-06-22 20:03:15 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:15.014447 | orchestrator | 2025-06-22 20:03:15 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:15.014880 | orchestrator | 2025-06-22 20:03:15 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:15.017000 | orchestrator | 2025-06-22 20:03:15 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:15.017448 | orchestrator | 2025-06-22 20:03:15 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:15.017471 | orchestrator | 2025-06-22 20:03:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:18.053908 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:18.054477 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:18.055586 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:18.058309 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:18.059269 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task 
248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:18.059297 | orchestrator | 2025-06-22 20:03:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:21.093167 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:21.093699 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:21.094414 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:21.094901 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:21.095722 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:21.095747 | orchestrator | 2025-06-22 20:03:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:24.117007 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:24.117084 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:24.117707 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:24.118077 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:24.118533 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:24.119444 | orchestrator | 2025-06-22 20:03:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:27.145349 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:27.145698 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:27.146259 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:27.146974 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:27.148521 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:27.148545 | orchestrator | 2025-06-22 20:03:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:30.180467 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:30.180557 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:30.180572 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:30.180584 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:30.180595 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:30.180606 | orchestrator | 2025-06-22 20:03:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:33.210920 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:33.214167 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 
b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:33.224048 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:33.224785 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:33.225547 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:33.225573 | orchestrator | 2025-06-22 20:03:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:36.259864 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:36.260325 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:36.261060 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:36.261827 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:36.262454 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:36.262484 | orchestrator | 2025-06-22 20:03:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:39.298319 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:39.300319 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:39.301863 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:39.303363 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:39.304750 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:39.304778 | orchestrator | 2025-06-22 20:03:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:42.336835 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:42.337305 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:42.338097 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:42.338779 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:42.339977 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:42.340027 | orchestrator | 2025-06-22 20:03:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:45.391010 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:45.394433 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:45.395016 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:45.397409 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:45.397869 | orchestrator | 2025-06-22 
20:03:45 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:45.397897 | orchestrator | 2025-06-22 20:03:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:48.451636 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:48.454620 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:48.455327 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:48.456120 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:48.456846 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:48.458207 | orchestrator | 2025-06-22 20:03:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:51.489920 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:51.490393 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:51.491112 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:51.492033 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:51.492906 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:51.492950 | orchestrator | 2025-06-22 20:03:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:54.537717 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:54.540639 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:54.542120 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:54.543577 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:54.544986 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:54.545089 | orchestrator | 2025-06-22 20:03:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:57.592739 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:03:57.593262 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:03:57.595934 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:03:57.596794 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:03:57.597628 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:03:57.597651 | orchestrator | 2025-06-22 20:03:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:00.626371 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:00.626585 | orchestrator | 2025-06-22 
20:04:00 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:00.627082 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:04:00.627762 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:00.628557 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:00.628631 | orchestrator | 2025-06-22 20:04:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:03.656260 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:03.656641 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:03.657546 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:04:03.658090 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:03.658842 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:03.658872 | orchestrator | 2025-06-22 20:04:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:06.690356 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:06.690639 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:06.692856 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:04:06.693630 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:06.694419 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:06.694561 | orchestrator | 2025-06-22 20:04:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:09.725106 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:09.728004 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:09.729808 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:04:09.731530 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:09.733362 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:09.733563 | orchestrator | 2025-06-22 20:04:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:12.764990 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:12.766521 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:12.770839 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state STARTED 2025-06-22 20:04:12.772371 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:12.773819 | 
orchestrator | 2025-06-22 20:04:12 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:12.773845 | orchestrator | 2025-06-22 20:04:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:15.815732 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state STARTED 2025-06-22 20:04:15.815846 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:15.816502 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 76c16166-9afe-4034-9605-a766c2b42b1b is in state SUCCESS 2025-06-22 20:04:15.817031 | orchestrator | 2025-06-22 20:04:15.817059 | orchestrator | 2025-06-22 20:04:15.817070 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-22 20:04:15.817080 | orchestrator | 2025-06-22 20:04:15.817090 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-22 20:04:15.817100 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:00.250) 0:00:00.250 *********** 2025-06-22 20:04:15.817111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-22 20:04:15.817122 | orchestrator | 2025-06-22 20:04:15.817132 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-22 20:04:15.817202 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:00.211) 0:00:00.461 *********** 2025-06-22 20:04:15.817222 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-22 20:04:15.817238 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-22 20:04:15.817255 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-22 20:04:15.817272 | orchestrator | 2025-06-22 20:04:15.817290 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-22 20:04:15.817306 | orchestrator | Sunday 22 June 2025 20:01:50 +0000 (0:00:01.252) 0:00:01.714 *********** 2025-06-22 20:04:15.817325 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-22 20:04:15.817341 | orchestrator | 2025-06-22 20:04:15.817357 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-22 20:04:15.817374 | orchestrator | Sunday 22 June 2025 20:01:52 +0000 (0:00:01.148) 0:00:02.863 *********** 2025-06-22 20:04:15.817391 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.817409 | orchestrator | 2025-06-22 20:04:15.817419 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-22 20:04:15.817432 | orchestrator | Sunday 22 June 2025 20:01:53 +0000 (0:00:01.067) 0:00:03.930 *********** 2025-06-22 20:04:15.817448 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.817464 | orchestrator | 2025-06-22 20:04:15.817480 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-22 20:04:15.817497 | orchestrator | Sunday 22 June 2025 20:01:54 +0000 (0:00:00.936) 0:00:04.867 *********** 2025-06-22 20:04:15.817513 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
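The retry above is the role waiting for the docker-compose based cephclient service on the manager to come up after its docker-compose.yml was copied (presumably under the /opt/cephclient tree created a few tasks earlier); the first attempt is retried, most likely because the container image is still being pulled, which is why the task ends up taking about 42 seconds in the recap below. Once the service is healthy, the next tasks install small wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) so the Ceph CLIs run inside that container instead of requiring packages on the host. A rough sketch of such a wrapper, with the install path, compose service name and exec mechanism assumed rather than taken from the role:

#!/usr/bin/env bash
# Hypothetical wrapper, e.g. installed as /usr/local/bin/ceph (path assumed).
# It forwards the call into the cephclient container of the compose project
# under /opt/cephclient, so "ceph -s" on the manager runs inside the container.
exec docker compose --project-directory /opt/cephclient exec -T cephclient ceph "$@"

The wrappers for the other items in the loop (ceph-authtool, rados, radosgw-admin, rbd) would differ only in the command they exec.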
2025-06-22 20:04:15.817530 | orchestrator | ok: [testbed-manager] 2025-06-22 20:04:15.817542 | orchestrator | 2025-06-22 20:04:15.817559 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-22 20:04:15.817575 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:41.810) 0:00:46.678 *********** 2025-06-22 20:04:15.817592 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-22 20:04:15.817609 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-22 20:04:15.817625 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-22 20:04:15.817641 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:04:15.817660 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-22 20:04:15.817677 | orchestrator | 2025-06-22 20:04:15.817751 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-22 20:04:15.817771 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:03.512) 0:00:50.190 *********** 2025-06-22 20:04:15.817789 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-22 20:04:15.817807 | orchestrator | 2025-06-22 20:04:15.817824 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-22 20:04:15.817841 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:00.403) 0:00:50.594 *********** 2025-06-22 20:04:15.817858 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:04:15.817874 | orchestrator | 2025-06-22 20:04:15.817891 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-22 20:04:15.817903 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:00.120) 0:00:50.714 *********** 2025-06-22 20:04:15.817914 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:04:15.817925 | orchestrator | 2025-06-22 20:04:15.817937 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-22 20:04:15.817955 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:00.268) 0:00:50.982 *********** 2025-06-22 20:04:15.817989 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.818007 | orchestrator | 2025-06-22 20:04:15.818082 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-22 20:04:15.818100 | orchestrator | Sunday 22 June 2025 20:02:42 +0000 (0:00:02.023) 0:00:53.006 *********** 2025-06-22 20:04:15.818132 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.818171 | orchestrator | 2025-06-22 20:04:15.818188 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-22 20:04:15.818204 | orchestrator | Sunday 22 June 2025 20:02:42 +0000 (0:00:00.765) 0:00:53.772 *********** 2025-06-22 20:04:15.818222 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.818238 | orchestrator | 2025-06-22 20:04:15.818254 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-22 20:04:15.818270 | orchestrator | Sunday 22 June 2025 20:02:43 +0000 (0:00:00.601) 0:00:54.374 *********** 2025-06-22 20:04:15.818287 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-22 20:04:15.818304 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-22 20:04:15.818320 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:04:15.818336 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-06-22 20:04:15.818352 | orchestrator | 2025-06-22 20:04:15.818368 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:04:15.818385 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:04:15.818403 | orchestrator | 2025-06-22 20:04:15.818420 | orchestrator | 2025-06-22 20:04:15.818453 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:04:15.818471 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:01.531) 0:00:55.905 *********** 2025-06-22 20:04:15.818488 | orchestrator | =============================================================================== 2025-06-22 20:04:15.818504 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.81s 2025-06-22 20:04:15.818521 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.51s 2025-06-22 20:04:15.818537 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.02s 2025-06-22 20:04:15.818552 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-06-22 20:04:15.818624 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2025-06-22 20:04:15.818642 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-06-22 20:04:15.818658 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.07s 2025-06-22 20:04:15.818708 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s 2025-06-22 20:04:15.818726 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2025-06-22 20:04:15.818743 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-06-22 20:04:15.818760 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s 2025-06-22 20:04:15.818778 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s 2025-06-22 20:04:15.818795 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-06-22 20:04:15.818811 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-06-22 20:04:15.818828 | orchestrator | 2025-06-22 20:04:15.818845 | orchestrator | 2025-06-22 20:04:15.818861 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-22 20:04:15.818877 | orchestrator | 2025-06-22 20:04:15.818894 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-22 20:04:15.818911 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:00.301) 0:00:00.301 *********** 2025-06-22 20:04:15.818928 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.818946 | orchestrator | 2025-06-22 20:04:15.818962 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-22 20:04:15.818978 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:01.528) 0:00:01.830 *********** 2025-06-22 20:04:15.818994 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819010 | orchestrator | 2025-06-22 20:04:15.819027 | orchestrator | TASK [Set 
mgr/dashboard/server_port to 7000] *********************************** 2025-06-22 20:04:15.819055 | orchestrator | Sunday 22 June 2025 20:02:52 +0000 (0:00:01.046) 0:00:02.877 *********** 2025-06-22 20:04:15.819071 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819088 | orchestrator | 2025-06-22 20:04:15.819104 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-22 20:04:15.819119 | orchestrator | Sunday 22 June 2025 20:02:53 +0000 (0:00:01.072) 0:00:03.950 *********** 2025-06-22 20:04:15.819129 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819172 | orchestrator | 2025-06-22 20:04:15.819184 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-22 20:04:15.819207 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:01.200) 0:00:05.150 *********** 2025-06-22 20:04:15.819216 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819237 | orchestrator | 2025-06-22 20:04:15.819248 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-22 20:04:15.819257 | orchestrator | Sunday 22 June 2025 20:02:55 +0000 (0:00:01.025) 0:00:06.175 *********** 2025-06-22 20:04:15.819267 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819276 | orchestrator | 2025-06-22 20:04:15.819286 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-22 20:04:15.819295 | orchestrator | Sunday 22 June 2025 20:02:56 +0000 (0:00:01.051) 0:00:07.227 *********** 2025-06-22 20:04:15.819305 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819314 | orchestrator | 2025-06-22 20:04:15.819324 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-22 20:04:15.819333 | orchestrator | Sunday 22 June 2025 20:02:58 +0000 (0:00:02.043) 0:00:09.270 *********** 2025-06-22 20:04:15.819343 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819360 | orchestrator | 2025-06-22 20:04:15.819370 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-22 20:04:15.819380 | orchestrator | Sunday 22 June 2025 20:03:00 +0000 (0:00:01.190) 0:00:10.461 *********** 2025-06-22 20:04:15.819389 | orchestrator | changed: [testbed-manager] 2025-06-22 20:04:15.819400 | orchestrator | 2025-06-22 20:04:15.819417 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-22 20:04:15.819435 | orchestrator | Sunday 22 June 2025 20:03:48 +0000 (0:00:48.423) 0:00:58.884 *********** 2025-06-22 20:04:15.819451 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:04:15.819467 | orchestrator | 2025-06-22 20:04:15.819484 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:04:15.819501 | orchestrator | 2025-06-22 20:04:15.819519 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:04:15.819535 | orchestrator | Sunday 22 June 2025 20:03:48 +0000 (0:00:00.145) 0:00:59.030 *********** 2025-06-22 20:04:15.819553 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:15.819569 | orchestrator | 2025-06-22 20:04:15.819586 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:04:15.819603 | orchestrator | 2025-06-22 20:04:15.819620 | orchestrator | TASK [Restart 
ceph manager service] ******************************************** 2025-06-22 20:04:15.819638 | orchestrator | Sunday 22 June 2025 20:04:00 +0000 (0:00:11.596) 0:01:10.626 *********** 2025-06-22 20:04:15.819654 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:15.819674 | orchestrator | 2025-06-22 20:04:15.819691 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:04:15.819708 | orchestrator | 2025-06-22 20:04:15.819737 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:04:15.819756 | orchestrator | Sunday 22 June 2025 20:04:11 +0000 (0:00:11.183) 0:01:21.810 *********** 2025-06-22 20:04:15.819774 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:15.819790 | orchestrator | 2025-06-22 20:04:15.819808 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:04:15.819825 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 20:04:15.819854 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:04:15.819872 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:04:15.819888 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:04:15.819905 | orchestrator | 2025-06-22 20:04:15.819923 | orchestrator | 2025-06-22 20:04:15.819940 | orchestrator | 2025-06-22 20:04:15.819958 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:04:15.819975 | orchestrator | Sunday 22 June 2025 20:04:12 +0000 (0:00:01.011) 0:01:22.821 *********** 2025-06-22 20:04:15.819992 | orchestrator | =============================================================================== 2025-06-22 20:04:15.820008 | orchestrator | Create admin user ------------------------------------------------------ 48.42s 2025-06-22 20:04:15.820025 | orchestrator | Restart ceph manager service ------------------------------------------- 23.79s 2025-06-22 20:04:15.820042 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2025-06-22 20:04:15.820059 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.53s 2025-06-22 20:04:15.820076 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s 2025-06-22 20:04:15.820093 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-06-22 20:04:15.820110 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s 2025-06-22 20:04:15.820127 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.05s 2025-06-22 20:04:15.820286 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.05s 2025-06-22 20:04:15.820315 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.03s 2025-06-22 20:04:15.820325 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2025-06-22 20:04:15.820335 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:15.820460 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 
248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:04:15.820473 | orchestrator | 2025-06-22 20:04:15 | INFO  | Wait 1 second(s) until the next check
[polling output repeats every ~3 seconds from 20:04:18 to 20:04:27: tasks c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a, b349658c-193c-406b-a844-fa5ca63cf9d1, 504c28c2-7cb3-4454-bf98-0bd4cf187641 and 248e686d-8e16-4e92-9828-bb75489b2976 remain in state STARTED, each pass followed by "Wait 1 second(s) until the next check"]
2025-06-22 20:04:31.045057 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:04:31.045813 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task c7872f7a-934d-4c9f-abe5-9ae6dbe2e02a is in state SUCCESS
2025-06-22 20:04:31.047030 | orchestrator |
2025-06-22 20:04:31.047164 | orchestrator |
2025-06-22 20:04:31.047179 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:04:31.047191 | orchestrator |
2025-06-22 20:04:31.047202 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:04:31.047214 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.259) 0:00:00.259
*********** 2025-06-22 20:04:31.047225 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:31.047237 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:31.047248 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:31.047259 | orchestrator | 2025-06-22 20:04:31.047270 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:04:31.047281 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.258) 0:00:00.517 *********** 2025-06-22 20:04:31.047292 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-22 20:04:31.047303 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-22 20:04:31.047314 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-22 20:04:31.047368 | orchestrator | 2025-06-22 20:04:31.047407 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-22 20:04:31.047421 | orchestrator | 2025-06-22 20:04:31.047432 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:04:31.047443 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.342) 0:00:00.860 *********** 2025-06-22 20:04:31.047455 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:31.047467 | orchestrator | 2025-06-22 20:04:31.047478 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-22 20:04:31.047489 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.471) 0:00:01.332 *********** 2025-06-22 20:04:31.047501 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-22 20:04:31.047512 | orchestrator | 2025-06-22 20:04:31.047523 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-22 20:04:31.047534 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:03.024) 0:00:04.356 *********** 2025-06-22 20:04:31.047544 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-22 20:04:31.047577 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-22 20:04:31.047589 | orchestrator | 2025-06-22 20:04:31.047600 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-22 20:04:31.047611 | orchestrator | Sunday 22 June 2025 20:02:41 +0000 (0:00:06.444) 0:00:10.800 *********** 2025-06-22 20:04:31.047622 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:04:31.047633 | orchestrator | 2025-06-22 20:04:31.047644 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-22 20:04:31.047655 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:02.889) 0:00:13.690 *********** 2025-06-22 20:04:31.047666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:04:31.047677 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-22 20:04:31.047687 | orchestrator | 2025-06-22 20:04:31.047711 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-22 20:04:31.047722 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:03.236) 0:00:16.926 *********** 2025-06-22 20:04:31.047733 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2025-06-22 20:04:31.047744 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-22 20:04:31.047754 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-22 20:04:31.047765 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-22 20:04:31.047776 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-22 20:04:31.047786 | orchestrator | 2025-06-22 20:04:31.047797 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-22 20:04:31.047808 | orchestrator | Sunday 22 June 2025 20:03:01 +0000 (0:00:13.802) 0:00:30.729 *********** 2025-06-22 20:04:31.047818 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-22 20:04:31.047829 | orchestrator | 2025-06-22 20:04:31.047840 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-22 20:04:31.047850 | orchestrator | Sunday 22 June 2025 20:03:05 +0000 (0:00:04.216) 0:00:34.946 *********** 2025-06-22 20:04:31.047865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.047892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.047912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.047929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.047941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.047952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.047972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.047983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048014 | orchestrator | 2025-06-22 20:04:31.048025 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-22 20:04:31.048036 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 (0:00:02.760) 0:00:37.706 *********** 2025-06-22 20:04:31.048047 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-22 20:04:31.048058 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-22 20:04:31.048068 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-22 20:04:31.048079 | orchestrator | 2025-06-22 20:04:31.048090 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-22 20:04:31.048100 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:01.220) 0:00:38.927 *********** 2025-06-22 20:04:31.048111 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.048122 | orchestrator | 2025-06-22 20:04:31.048149 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-22 20:04:31.048161 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.238) 0:00:39.166 *********** 2025-06-22 20:04:31.048172 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.048187 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.048198 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.048209 | orchestrator | 2025-06-22 20:04:31.048220 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:04:31.048231 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.803) 0:00:39.969 *********** 2025-06-22 20:04:31.048241 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:31.048252 | orchestrator | 2025-06-22 20:04:31.048263 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-22 20:04:31.048273 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.444) 0:00:40.413 *********** 2025-06-22 20:04:31.048285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.048303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.048323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.048334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.048421 | orchestrator | 2025-06-22 20:04:31.048432 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-22 20:04:31.048443 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:03.182) 0:00:43.596 *********** 2025-06-22 20:04:31.048459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048500 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.048519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048554 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.048569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048619 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.048630 | orchestrator | 2025-06-22 20:04:31.048647 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-22 20:04:31.048658 | 
orchestrator | Sunday 22 June 2025 20:03:16 +0000 (0:00:01.513) 0:00:45.110 *********** 2025-06-22 20:04:31.048670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048709 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.048720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048767 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.048779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.048791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.048818 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.048829 | orchestrator | 2025-06-22 20:04:31.048840 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-22 20:04:31.048867 | orchestrator | Sunday 22 June 2025 20:03:17 +0000 (0:00:00.975) 0:00:46.085 *********** 2025-06-22 20:04:31.048878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049228 | orchestrator | 2025-06-22 20:04:31.049239 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-22 20:04:31.049250 | orchestrator | Sunday 22 June 2025 20:03:20 +0000 (0:00:03.311) 0:00:49.397 *********** 2025-06-22 20:04:31.049261 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.049272 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:31.049283 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:31.049294 | orchestrator | 2025-06-22 20:04:31.049305 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-22 20:04:31.049316 | orchestrator | Sunday 22 June 2025 20:03:22 +0000 (0:00:02.466) 0:00:51.863 *********** 2025-06-22 20:04:31.049327 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:04:31.049338 | orchestrator | 2025-06-22 20:04:31.049349 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-22 20:04:31.049360 | orchestrator | Sunday 22 June 2025 20:03:23 +0000 (0:00:00.952) 0:00:52.816 *********** 2025-06-22 20:04:31.049370 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.049387 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.049398 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.049409 | orchestrator | 2025-06-22 20:04:31.049420 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-22 20:04:31.049435 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:01.029) 0:00:53.845 *********** 2025-06-22 20:04:31.049447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049573 | orchestrator | 2025-06-22 20:04:31.049584 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-22 20:04:31.049595 | orchestrator | Sunday 22 June 2025 20:03:32 +0000 (0:00:08.123) 0:01:01.969 *********** 2025-06-22 20:04:31.049607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.049628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049651 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.049668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.049680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049702 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.049721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:04:31.049741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:04:31.049767 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.049780 | orchestrator | 2025-06-22 20:04:31.049792 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-22 20:04:31.049805 | orchestrator | Sunday 22 June 2025 20:03:34 +0000 (0:00:01.382) 0:01:03.351 *********** 2025-06-22 20:04:31.049826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:04:31.049880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:04:31.049971 | orchestrator | 2025-06-22 20:04:31.049988 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:04:31.050001 | orchestrator | Sunday 22 June 2025 20:03:38 +0000 (0:00:03.672) 0:01:07.023 *********** 2025-06-22 20:04:31.050014 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:31.050077 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:31.050090 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:31.050101 | orchestrator | 2025-06-22 20:04:31.050113 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-22 20:04:31.050123 | orchestrator | Sunday 22 June 2025 20:03:38 +0000 (0:00:00.522) 0:01:07.546 *********** 2025-06-22 20:04:31.050191 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050204 | orchestrator | 2025-06-22 20:04:31.050215 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-22 20:04:31.050226 | orchestrator | Sunday 22 June 2025 20:03:41 +0000 (0:00:02.573) 0:01:10.119 *********** 2025-06-22 20:04:31.050237 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050248 | orchestrator | 2025-06-22 20:04:31.050259 | 
orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-22 20:04:31.050269 | orchestrator | Sunday 22 June 2025 20:03:43 +0000 (0:00:02.124) 0:01:12.244 *********** 2025-06-22 20:04:31.050280 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050291 | orchestrator | 2025-06-22 20:04:31.050302 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:04:31.050313 | orchestrator | Sunday 22 June 2025 20:03:55 +0000 (0:00:12.316) 0:01:24.561 *********** 2025-06-22 20:04:31.050323 | orchestrator | 2025-06-22 20:04:31.050333 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:04:31.050342 | orchestrator | Sunday 22 June 2025 20:03:55 +0000 (0:00:00.138) 0:01:24.699 *********** 2025-06-22 20:04:31.050352 | orchestrator | 2025-06-22 20:04:31.050361 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:04:31.050371 | orchestrator | Sunday 22 June 2025 20:03:55 +0000 (0:00:00.128) 0:01:24.827 *********** 2025-06-22 20:04:31.050380 | orchestrator | 2025-06-22 20:04:31.050390 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-22 20:04:31.050399 | orchestrator | Sunday 22 June 2025 20:03:55 +0000 (0:00:00.142) 0:01:24.970 *********** 2025-06-22 20:04:31.050409 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050418 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:31.050428 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:31.050437 | orchestrator | 2025-06-22 20:04:31.050447 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-22 20:04:31.050457 | orchestrator | Sunday 22 June 2025 20:04:07 +0000 (0:00:11.321) 0:01:36.292 *********** 2025-06-22 20:04:31.050473 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050483 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:31.050499 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:31.050509 | orchestrator | 2025-06-22 20:04:31.050519 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-22 20:04:31.050529 | orchestrator | Sunday 22 June 2025 20:04:17 +0000 (0:00:10.065) 0:01:46.357 *********** 2025-06-22 20:04:31.050538 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:31.050548 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:31.050557 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:31.050567 | orchestrator | 2025-06-22 20:04:31.050577 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:04:31.050587 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:04:31.050597 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:04:31.050607 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:04:31.050617 | orchestrator | 2025-06-22 20:04:31.050627 | orchestrator | 2025-06-22 20:04:31.050637 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:04:31.050646 | orchestrator | Sunday 22 June 2025 20:04:27 +0000 (0:00:10.107) 0:01:56.465 *********** 2025-06-22 
20:04:31.050656 | orchestrator | =============================================================================== 2025-06-22 20:04:31.050665 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 13.80s 2025-06-22 20:04:31.050675 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.32s 2025-06-22 20:04:31.050685 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.32s 2025-06-22 20:04:31.050694 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.11s 2025-06-22 20:04:31.050704 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.07s 2025-06-22 20:04:31.050713 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.12s 2025-06-22 20:04:31.050723 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.44s 2025-06-22 20:04:31.050732 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.22s 2025-06-22 20:04:31.050742 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.68s 2025-06-22 20:04:31.050751 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.31s 2025-06-22 20:04:31.050761 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.24s 2025-06-22 20:04:31.050770 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.18s 2025-06-22 20:04:31.050780 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.02s 2025-06-22 20:04:31.050789 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.89s 2025-06-22 20:04:31.050804 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.76s 2025-06-22 20:04:31.050814 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.57s 2025-06-22 20:04:31.050823 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.47s 2025-06-22 20:04:31.050833 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.12s 2025-06-22 20:04:31.050842 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.51s 2025-06-22 20:04:31.050852 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.38s 2025-06-22 20:04:31.054105 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:31.054234 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:31.054252 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:31.054265 | orchestrator | 2025-06-22 20:04:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:34.081989 | orchestrator | 2025-06-22 20:04:34 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:34.083793 | orchestrator | 2025-06-22 20:04:34 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:34.085641 | orchestrator | 2025-06-22 20:04:34 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:34.087593 
| orchestrator | 2025-06-22 20:04:34 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:34.087630 | orchestrator | 2025-06-22 20:04:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:37.117973 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:37.118186 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:37.118875 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:37.120537 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:37.120562 | orchestrator | 2025-06-22 20:04:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:40.169363 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:40.169446 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:40.169622 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:40.170322 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:40.170521 | orchestrator | 2025-06-22 20:04:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:43.217255 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:43.218623 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:43.220544 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:43.222096 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:43.222240 | orchestrator | 2025-06-22 20:04:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:46.262087 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:46.262246 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:46.263654 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:46.264730 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:46.264973 | orchestrator | 2025-06-22 20:04:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:49.306693 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:49.307475 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:49.311431 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:49.312574 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:49.314235 | orchestrator | 2025-06-22 20:04:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:52.361599 | orchestrator | 
2025-06-22 20:04:52 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:52.363226 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:52.365362 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:52.367460 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:52.367541 | orchestrator | 2025-06-22 20:04:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:55.410648 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:55.411738 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:55.413175 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:55.414569 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:55.414606 | orchestrator | 2025-06-22 20:04:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:58.468797 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:04:58.469517 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:04:58.471169 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:04:58.472810 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:04:58.473019 | orchestrator | 2025-06-22 20:04:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:01.524403 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:01.524501 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:01.525372 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:01.527400 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:01.527640 | orchestrator | 2025-06-22 20:05:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:04.577452 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:04.578878 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:04.581348 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:04.582416 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:04.582647 | orchestrator | 2025-06-22 20:05:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:07.627830 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:07.628715 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:07.629892 | orchestrator | 
2025-06-22 20:05:07 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:07.630725 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:07.630764 | orchestrator | 2025-06-22 20:05:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:10.674475 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:10.675827 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:10.676506 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:10.677836 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:10.677898 | orchestrator | 2025-06-22 20:05:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:13.723440 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:13.725611 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:13.729128 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:13.730969 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:13.731105 | orchestrator | 2025-06-22 20:05:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:16.774930 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:16.777263 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:16.779505 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:16.782364 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:16.782402 | orchestrator | 2025-06-22 20:05:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:19.834796 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:19.837687 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:19.840752 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:19.842589 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:19.843173 | orchestrator | 2025-06-22 20:05:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:22.887787 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED 2025-06-22 20:05:22.889273 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:22.891914 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state STARTED 2025-06-22 20:05:22.894329 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:22.894353 | orchestrator | 
2025-06-22 20:05:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:25.967871 | orchestrator | 2025-06-22 20:05:25.968304 | orchestrator | 2025-06-22 20:05:25.968342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:05:25.968358 | orchestrator | 2025-06-22 20:05:25.968371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:05:25.968385 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.270) 0:00:00.270 *********** 2025-06-22 20:05:25.968398 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:25.968423 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:25.968436 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:25.968449 | orchestrator | 2025-06-22 20:05:25.968461 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:05:25.968474 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.298) 0:00:00.568 *********** 2025-06-22 20:05:25.968488 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-22 20:05:25.968500 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-22 20:05:25.968513 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-22 20:05:25.968526 | orchestrator | 2025-06-22 20:05:25.968538 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-22 20:05:25.968551 | orchestrator | 2025-06-22 20:05:25.968563 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:05:25.968576 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.398) 0:00:00.967 *********** 2025-06-22 20:05:25.968589 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:25.968602 | orchestrator | 2025-06-22 20:05:25.968615 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-22 20:05:25.968628 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.571) 0:00:01.538 *********** 2025-06-22 20:05:25.968640 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-22 20:05:25.968652 | orchestrator | 2025-06-22 20:05:25.968665 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-22 20:05:25.968677 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:03.133) 0:00:04.671 *********** 2025-06-22 20:05:25.968690 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-22 20:05:25.968718 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-22 20:05:25.968730 | orchestrator | 2025-06-22 20:05:25.968741 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-22 20:05:25.968752 | orchestrator | Sunday 22 June 2025 20:02:41 +0000 (0:00:05.646) 0:00:10.318 *********** 2025-06-22 20:05:25.968763 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-22 20:05:25.968774 | orchestrator | 2025-06-22 20:05:25.968785 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-22 20:05:25.968796 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:02.947) 0:00:13.265 *********** 
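
For orientation: the service-ks-register tasks in this play register designate in Keystone exactly as they did for barbican in the previous play — a `dns` service, internal and public endpoints on port 9001, the `service` project, the `designate` service user, and an `admin` role grant. The sketch below expresses the same sequence with openstacksdk purely as an illustration; the play drives these steps through Ansible modules, not this code. The `admin` entry in `clouds.yaml` and the `DESIGNATE_KEYSTONE_PASSWORD` environment variable are assumptions made for the example only.

```python
# Illustrative sketch only (not the service-ks-register role itself): the same
# Keystone objects the tasks in this play create for designate, expressed with
# openstacksdk. Assumes an "admin" cloud in clouds.yaml and a password supplied
# via DESIGNATE_KEYSTONE_PASSWORD (both hypothetical).
import os

import openstack

conn = openstack.connect(cloud="admin")

# "designate | Creating services"
service = conn.identity.create_service(name="designate", type="dns")

# "designate | Creating endpoints" (internal and public, port 9001)
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9001"),
    ("public", "https://api.testbed.osism.xyz:9001"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# "designate | Creating projects" / "Creating users"
project = conn.identity.create_project(name="service")
user = conn.identity.create_user(
    name="designate",
    password=os.environ["DESIGNATE_KEYSTONE_PASSWORD"],
    default_project_id=project.id,
)

# "designate | Creating roles" / "Granting user roles"
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)
```
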
2025-06-22 20:05:25.968806 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:05:25.968817 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-22 20:05:25.968828 | orchestrator | 2025-06-22 20:05:25.968839 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-22 20:05:25.968850 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:03.331) 0:00:16.596 *********** 2025-06-22 20:05:25.968860 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:05:25.968871 | orchestrator | 2025-06-22 20:05:25.968908 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-22 20:05:25.968920 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:03.120) 0:00:19.717 *********** 2025-06-22 20:05:25.968930 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-22 20:05:25.968941 | orchestrator | 2025-06-22 20:05:25.968952 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-22 20:05:25.968963 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:03.594) 0:00:23.312 *********** 2025-06-22 20:05:25.968977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969229 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969327 | orchestrator | 2025-06-22 20:05:25.969414 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-22 20:05:25.969426 | orchestrator | Sunday 22 June 2025 20:02:57 +0000 (0:00:02.555) 0:00:25.867 *********** 2025-06-22 20:05:25.969437 | orchestrator | skipping: [testbed-node-0] 
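The item=... structures echoed above are the entries of the designate services map that the kolla-ansible role loops over: each key names a service, and each value carries that service's container name, image, bind mounts, and healthcheck command. As a minimal sketch only (reduced to two services, with a hypothetical summarize() helper that is not part of this job), assuming the dict shape shown in the log:

# Sketch: summarise the per-service items that the designate role iterates over.
# The values below are copied from the log output; the reduced service set and
# the summarize() helper are illustrative assumptions, not part of kolla-ansible.
designate_services = {
    "designate-central": {
        "container_name": "designate_central",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-central:19.0.1.20250530",
        "volumes": [
            "/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port designate-central 5672"],
            "timeout": "30",
        },
    },
    "designate-worker": {
        "container_name": "designate_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530",
        "volumes": [
            "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
            "timeout": "30",
        },
    },
}

def summarize(services: dict) -> None:
    """Print one line per enabled service: container, image tag, health test."""
    for key, svc in services.items():
        if not svc.get("enabled"):
            continue
        tag = svc["image"].rsplit(":", 1)[-1]
        test = " ".join(svc["healthcheck"]["test"][1:])
        print(f"{key}: container={svc['container_name']} tag={tag} check='{test}'")

if __name__ == "__main__":
    summarize(designate_services)

Run directly, this prints one summary line per enabled service, e.g. designate-central: container=designate_central tag=19.0.1.20250530 check='healthcheck_port designate-central 5672'.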
2025-06-22 20:05:25.969448 | orchestrator | 2025-06-22 20:05:25.969459 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-22 20:05:25.969470 | orchestrator | Sunday 22 June 2025 20:02:57 +0000 (0:00:00.143) 0:00:26.011 *********** 2025-06-22 20:05:25.969481 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.969502 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.969513 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.969524 | orchestrator | 2025-06-22 20:05:25.969535 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:05:25.969545 | orchestrator | Sunday 22 June 2025 20:02:57 +0000 (0:00:00.279) 0:00:26.290 *********** 2025-06-22 20:05:25.969556 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:25.969567 | orchestrator | 2025-06-22 20:05:25.969578 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-22 20:05:25.969589 | orchestrator | Sunday 22 June 2025 20:02:58 +0000 (0:00:00.693) 0:00:26.984 *********** 2025-06-22 20:05:25.969607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.969656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969797 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.969870 | orchestrator | 2025-06-22 20:05:25.969882 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-22 20:05:25.969892 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:06.123) 0:00:33.107 *********** 2025-06-22 20:05:25.969904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.969922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.969947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.969963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.969975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 20:05:25.969986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 20:05:25.969998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 20:05:25.970080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 20:05:25.970097 | orchestrator | 2025-06-22 20:05:25 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:05:25.970109 | orchestrator | 2025-06-22 20:05:25 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:25.970943 | orchestrator | 2025-06-22 20:05:25 | INFO  | Task 504c28c2-7cb3-4454-bf98-0bd4cf187641 is in state SUCCESS
2025-06-22 20:05:25.971093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 20:05:25.971148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes':
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971216 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.971230 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.971243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.971301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.971324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971407 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.971425 | orchestrator | 2025-06-22 20:05:25.971445 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-22 20:05:25.971466 | orchestrator | Sunday 22 June 2025 20:03:05 +0000 (0:00:01.084) 0:00:34.191 *********** 2025-06-22 20:05:25.971485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.971554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.971585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971668 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.971688 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.971732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.971745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971791 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971804 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.971817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.971848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.971860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 
20:05:25.971888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.971933 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.971965 | orchestrator | 2025-06-22 20:05:25.971978 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-22 20:05:25.971990 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:02.585) 0:00:36.777 *********** 2025-06-22 20:05:25.972002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972038 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-06-22 20:05:25.972315 | orchestrator | 2025-06-22 20:05:25.972327 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-22 20:05:25.972338 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:06.430) 0:00:43.207 *********** 2025-06-22 20:05:25.972350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.972399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2025-06-22 20:05:25.972487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.972670 | orchestrator | 2025-06-22 20:05:25.972689 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-22 20:05:25.972707 | orchestrator | Sunday 22 June 2025 20:03:32 +0000 (0:00:18.516) 0:01:01.724 *********** 2025-06-22 20:05:25.972726 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:05:25.972745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:05:25.972763 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:05:25.972781 | orchestrator | 2025-06-22 20:05:25.972798 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-22 20:05:25.972815 | orchestrator | Sunday 22 June 2025 20:03:39 +0000 (0:00:06.966) 0:01:08.691 *********** 2025-06-22 20:05:25.972834 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:05:25.972851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:05:25.972871 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 
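The loop items dumped above are the designate service map that the kolla-ansible role iterates over: each entry names the container, the image tag, the bind-mounted config directories, and a healthcheck (healthcheck_curl against the API on port 9001, healthcheck_listen for bind9 on port 53, and healthcheck_port against the RabbitMQ port 5672 for the central/mdns/producer/worker services). As a minimal illustrative sketch only, grounded in nothing beyond what the log prints (this is not the kolla healthcheck scripts themselves, and the service_map literal below is a trimmed, hypothetical excerpt of that structure), the following Python walks such a map and performs a plain TCP reachability probe of the kind a port healthcheck roughly amounts to:

import socket

# Trimmed, hypothetical excerpt of the service map shown in the loop output above.
service_map = {
    "designate-api": {
        "container_name": "designate_api",
        "image": "registry.osism.tech/kolla/release/designate-api:19.0.1.20250530",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]},
    },
    "designate-worker": {
        "container_name": "designate_worker",
        "image": "registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"]},
    },
}

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough stand-in for a port healthcheck: can a TCP connection be opened?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, svc in service_map.items():
    check = " ".join(svc["healthcheck"]["test"][1:])
    print(f"{svc['container_name']}: image={svc['image']} check='{check}'")

# The 192.168.16.x addresses and port 5672 come from the log above and are only
# reachable from inside the testbed network.
print("amqp reachable:", tcp_port_open("192.168.16.10", 5672))

Note that kolla's real healthcheck_port script inspects the named process's existing connections rather than opening a new one; the probe above is only a simplified approximation.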
2025-06-22 20:05:25.972890 | orchestrator | 2025-06-22 20:05:25.972907 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-22 20:05:25.972923 | orchestrator | Sunday 22 June 2025 20:03:43 +0000 (0:00:03.489) 0:01:12.180 *********** 2025-06-22 20:05:25.972947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.972968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.972989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 
20:05:25.973147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973276 | orchestrator | 2025-06-22 20:05:25.973288 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-22 20:05:25.973299 | orchestrator | Sunday 22 June 2025 20:03:46 +0000 (0:00:02.878) 0:01:15.058 *********** 2025-06-22 20:05:25.973317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973331 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973490 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.973651 | orchestrator | 2025-06-22 20:05:25.973662 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:05:25.973675 | orchestrator | Sunday 22 June 2025 20:03:48 +0000 (0:00:02.613) 0:01:17.672 *********** 2025-06-22 20:05:25.973686 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.973698 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.973709 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.973720 | orchestrator | 2025-06-22 20:05:25.973731 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-22 20:05:25.973742 | orchestrator | Sunday 22 June 2025 20:03:49 +0000 (0:00:00.692) 0:01:18.365 *********** 2025-06-22 20:05:25.973761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.973798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973852 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.973879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.973911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.973938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.973995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.974259 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.974308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:05:25.974335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:05:25.974355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.974367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.974378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.974390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:05:25.974401 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.974420 | orchestrator | 2025-06-22 20:05:25.974431 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-22 20:05:25.974443 | orchestrator | Sunday 22 June 2025 20:03:50 +0000 (0:00:00.708) 0:01:19.074 *********** 2025-06-22 20:05:25.974461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.974474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.974502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:05:25.974515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974788 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:05:25.974851 | orchestrator | 2025-06-22 20:05:25.974864 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:05:25.974876 | orchestrator | Sunday 22 June 2025 20:03:54 +0000 (0:00:04.206) 0:01:23.280 *********** 2025-06-22 20:05:25.974887 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:25.974899 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:25.974910 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:25.974921 | orchestrator | 2025-06-22 20:05:25.974932 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-22 20:05:25.974943 | orchestrator | Sunday 22 June 2025 20:03:54 +0000 (0:00:00.424) 0:01:23.704 *********** 2025-06-22 20:05:25.974957 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-22 20:05:25.974977 | orchestrator | 2025-06-22 20:05:25.974988 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-22 20:05:25.974999 | 
orchestrator | Sunday 22 June 2025 20:03:57 +0000 (0:00:02.351) 0:01:26.056 *********** 2025-06-22 20:05:25.975010 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:05:25.975021 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-22 20:05:25.975031 | orchestrator | 2025-06-22 20:05:25.975071 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-22 20:05:25.975100 | orchestrator | Sunday 22 June 2025 20:03:59 +0000 (0:00:02.143) 0:01:28.199 *********** 2025-06-22 20:05:25.975113 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:25.975124 | orchestrator | 2025-06-22 20:05:25.975135 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:05:25.975146 | orchestrator | Sunday 22 June 2025 20:04:13 +0000 (0:00:14.451) 0:01:42.650 *********** 2025-06-22 20:05:25.975160 | orchestrator | 2025-06-22 20:05:25.975179 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:05:25.975197 | orchestrator | Sunday 22 June 2025 20:04:13 +0000 (0:00:00.061) 0:01:42.712 *********** 2025-06-22 20:05:25.975216 | orchestrator | 2025-06-22 20:05:25.975234 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:05:25.975251 | orchestrator | Sunday 22 June 2025 20:04:14 +0000 (0:00:00.082) 0:01:42.795 *********** 2025-06-22 20:05:25.975269 | orchestrator | 2025-06-22 20:05:25.975286 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-22 20:05:25.975305 | orchestrator | Sunday 22 June 2025 20:04:14 +0000 (0:00:00.067) 0:01:42.863 *********** 2025-06-22 20:05:25.975323 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:25.975342 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:25.975360 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:25.975380 | orchestrator | 2025-06-22 20:05:25.975398 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-22 20:05:25.975417 | orchestrator | Sunday 22 June 2025 20:04:27 +0000 (0:00:13.338) 0:01:56.202 *********** 2025-06-22 20:05:25.975435 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:25.975448 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:25.975459 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:25.975469 | orchestrator | 2025-06-22 20:05:25.975480 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-22 20:05:25.975499 | orchestrator | Sunday 22 June 2025 20:04:35 +0000 (0:00:08.053) 0:02:04.256 *********** 2025-06-22 20:05:25.975510 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:25.975521 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:25.975532 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:25.975543 | orchestrator | 2025-06-22 20:05:25.975563 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-22 20:05:25.975574 | orchestrator | Sunday 22 June 2025 20:04:47 +0000 (0:00:11.985) 0:02:16.242 *********** 2025-06-22 20:05:25.975584 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:25.975595 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:25.975606 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:25.975624 | orchestrator | 2025-06-22 20:05:25.975642 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-22 20:05:25.975659 | orchestrator | Sunday 22 June 2025 20:04:57 +0000 (0:00:10.075) 0:02:26.317 ***********
2025-06-22 20:05:25.975677 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:05:25.975695 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:05:25.975715 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:25.975735 | orchestrator |
2025-06-22 20:05:25.975753 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-22 20:05:25.975770 | orchestrator | Sunday 22 June 2025 20:05:06 +0000 (0:00:08.765) 0:02:35.083 ***********
2025-06-22 20:05:25.975781 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:05:25.975792 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:25.975802 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:05:25.975813 | orchestrator |
2025-06-22 20:05:25.975825 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-22 20:05:25.975835 | orchestrator | Sunday 22 June 2025 20:05:17 +0000 (0:00:11.609) 0:02:46.693 ***********
2025-06-22 20:05:25.975846 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:25.975857 | orchestrator |
2025-06-22 20:05:25.975868 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:05:25.975880 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-22 20:05:25.975893 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 20:05:25.975904 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 20:05:25.975915 | orchestrator |
2025-06-22 20:05:25.975926 | orchestrator |
2025-06-22 20:05:25.975937 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:05:25.975948 | orchestrator | Sunday 22 June 2025 20:05:25 +0000 (0:00:07.274) 0:02:53.967 ***********
2025-06-22 20:05:25.975959 | orchestrator | ===============================================================================
2025-06-22 20:05:25.975970 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.52s
2025-06-22 20:05:25.975981 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.45s
2025-06-22 20:05:25.975991 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.34s
2025-06-22 20:05:25.976002 | orchestrator | designate : Restart designate-central container ------------------------ 11.99s
2025-06-22 20:05:25.976013 | orchestrator | designate : Restart designate-worker container ------------------------- 11.61s
2025-06-22 20:05:25.976024 | orchestrator | designate : Restart designate-producer container ----------------------- 10.08s
2025-06-22 20:05:25.976034 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.77s
2025-06-22 20:05:25.976121 | orchestrator | designate : Restart designate-api container ----------------------------- 8.05s
2025-06-22 20:05:25.976137 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.27s
2025-06-22 20:05:25.976148 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.97s
2025-06-22 20:05:25.976170 | orchestrator | designate : Copying over config.json files for services ----------------- 6.43s
2025-06-22 20:05:25.976181 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.12s
2025-06-22 20:05:25.976192 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.65s
2025-06-22 20:05:25.976212 | orchestrator | designate : Check designate containers ---------------------------------- 4.21s
2025-06-22 20:05:25.976223 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.59s
2025-06-22 20:05:25.976234 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.49s
2025-06-22 20:05:25.976244 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.33s
2025-06-22 20:05:25.976254 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.13s
2025-06-22 20:05:25.976263 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.12s
2025-06-22 20:05:25.976273 | orchestrator | service-ks-register : designate | Creating projects --------------------- 2.95s
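The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow are the deploy wrapper polling the state of the OSISM tasks it has queued, once per second, until each task reports SUCCESS. A minimal sketch of that polling pattern (the get_task_state helper is hypothetical and stands in for whatever task-state lookup the real osism client uses):

    import logging
    import time

    logging.basicConfig(level=logging.INFO, format="%(asctime)s | INFO  | %(message)s")
    log = logging.getLogger("task-poll-sketch")

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll every queued task until it reports a final state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical helper, e.g. an API call
                log.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                log.info("Wait %s second(s) until the next check", interval)
                time.sleep(interval)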
2025-06-22 20:05:25.976283 | orchestrator | 2025-06-22 20:05:25 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:25.976293 | orchestrator | 2025-06-22 20:05:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:29.038783 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:29.038902 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:05:29.038950 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:29.042784 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:29.042863 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:29.042886 | orchestrator | 2025-06-22 20:05:29 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:32.097155 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:32.098453 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:05:32.102783 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:32.108427 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:32.112233 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:32.112269 | orchestrator | 2025-06-22 20:05:32 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:35.156204 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:35.156778 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:05:35.157530 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:35.158703 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:35.159518 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:35.159701 | orchestrator | 2025-06-22 20:05:35 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:38.216105 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:38.217155 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state STARTED
2025-06-22 20:05:38.220880 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:38.222384 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:38.223434 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:38.223850 | orchestrator | 2025-06-22 20:05:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:41.267573 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:41.268357 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task e32c3f0c-0786-4282-8ebe-579c2f2ca8d9 is in state SUCCESS
2025-06-22 20:05:41.269716 | orchestrator |
2025-06-22 20:05:41.269759 | orchestrator |
2025-06-22 20:05:41.269777 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:05:41.269796 | orchestrator |
2025-06-22 20:05:41.269812 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:05:41.269829 | orchestrator | Sunday 22 June 2025 20:04:34 +0000 (0:00:00.244) 0:00:00.244 ***********
2025-06-22 20:05:41.269846 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:05:41.269863 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:05:41.269880 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:05:41.269897 | orchestrator |
2025-06-22 20:05:41.269914 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:05:41.269931 | orchestrator | Sunday 22 June 2025 20:04:34 +0000 (0:00:00.263) 0:00:00.507 ***********
2025-06-22 20:05:41.269948 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-06-22 20:05:41.269965 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-06-22 20:05:41.269981 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-06-22 20:05:41.269998 | orchestrator |
2025-06-22 20:05:41.270013 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-06-22 20:05:41.270121 | orchestrator |
2025-06-22 20:05:41.270138 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-22 20:05:41.270156 | orchestrator | Sunday 22 June 2025 20:04:34 +0000 (0:00:00.321) 0:00:00.828 ***********
2025-06-22 20:05:41.270171 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:05:41.270187 | orchestrator |
2025-06-22 20:05:41.270203 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-22 20:05:41.270219 | orchestrator | Sunday 22 June 2025 20:04:35 +0000 (0:00:00.943) 0:00:01.771 *********** 2025-06-22 20:05:41.270235 
| orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-22 20:05:41.270251 | orchestrator | 2025-06-22 20:05:41.270284 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-22 20:05:41.270301 | orchestrator | Sunday 22 June 2025 20:04:39 +0000 (0:00:04.035) 0:00:05.807 *********** 2025-06-22 20:05:41.270318 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-22 20:05:41.270425 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-22 20:05:41.270446 | orchestrator | 2025-06-22 20:05:41.270463 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-22 20:05:41.270479 | orchestrator | Sunday 22 June 2025 20:04:46 +0000 (0:00:06.636) 0:00:12.444 *********** 2025-06-22 20:05:41.270496 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:05:41.270513 | orchestrator | 2025-06-22 20:05:41.270529 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-22 20:05:41.270546 | orchestrator | Sunday 22 June 2025 20:04:49 +0000 (0:00:03.206) 0:00:15.650 *********** 2025-06-22 20:05:41.270562 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:05:41.270579 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-22 20:05:41.270595 | orchestrator | 2025-06-22 20:05:41.270633 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-22 20:05:41.270651 | orchestrator | Sunday 22 June 2025 20:04:52 +0000 (0:00:03.389) 0:00:19.040 *********** 2025-06-22 20:05:41.270668 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:05:41.270685 | orchestrator | 2025-06-22 20:05:41.270702 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-22 20:05:41.270720 | orchestrator | Sunday 22 June 2025 20:04:55 +0000 (0:00:02.822) 0:00:21.863 *********** 2025-06-22 20:05:41.270737 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-22 20:05:41.270753 | orchestrator | 2025-06-22 20:05:41.270770 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:05:41.270785 | orchestrator | Sunday 22 June 2025 20:04:59 +0000 (0:00:03.766) 0:00:25.629 *********** 2025-06-22 20:05:41.270802 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.270819 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:41.270835 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:41.270851 | orchestrator | 2025-06-22 20:05:41.270868 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-22 20:05:41.270885 | orchestrator | Sunday 22 June 2025 20:04:59 +0000 (0:00:00.350) 0:00:25.980 *********** 2025-06-22 20:05:41.270905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.270945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.270973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271005 | orchestrator | 2025-06-22 20:05:41.271023 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-22 20:05:41.271115 | orchestrator | Sunday 22 June 2025 20:05:00 +0000 (0:00:00.837) 0:00:26.817 *********** 2025-06-22 20:05:41.271140 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.271158 | orchestrator | 2025-06-22 20:05:41.271177 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-22 20:05:41.271194 | orchestrator | Sunday 22 June 2025 20:05:00 +0000 (0:00:00.128) 0:00:26.946 *********** 2025-06-22 20:05:41.271212 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.271230 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:41.271246 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:41.271263 | orchestrator | 2025-06-22 20:05:41.271281 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:05:41.271300 | orchestrator | Sunday 22 June 2025 20:05:01 +0000 (0:00:00.587) 0:00:27.534 *********** 2025-06-22 20:05:41.271317 | orchestrator | included: 
/ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:41.271335 | orchestrator | 2025-06-22 20:05:41.271352 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-22 20:05:41.271369 | orchestrator | Sunday 22 June 2025 20:05:01 +0000 (0:00:00.502) 0:00:28.036 *********** 2025-06-22 20:05:41.271387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271474 | orchestrator | 2025-06-22 20:05:41.271500 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-22 20:05:41.271519 | orchestrator | Sunday 22 June 2025 20:05:03 +0000 (0:00:01.456) 0:00:29.493 *********** 2025-06-22 20:05:41.271537 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271556 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.271574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271593 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:41.271621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271640 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:41.271656 | orchestrator | 2025-06-22 20:05:41.271673 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-22 20:05:41.271692 | orchestrator | Sunday 22 June 2025 20:05:04 +0000 (0:00:00.719) 0:00:30.212 *********** 2025-06-22 20:05:41.271710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271745 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.271763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271781 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:41.271800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.271818 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:41.271835 | orchestrator | 2025-06-22 20:05:41.271853 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-22 20:05:41.271871 | orchestrator | Sunday 22 June 2025 20:05:04 +0000 (0:00:00.694) 0:00:30.907 *********** 2025-06-22 20:05:41.271899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.271987 | orchestrator | 2025-06-22 20:05:41.272005 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-22 20:05:41.272022 | orchestrator | Sunday 22 June 2025 20:05:06 +0000 (0:00:01.296) 0:00:32.203 *********** 2025-06-22 20:05:41.272072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.272092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.272121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:05:41.272149 | orchestrator | 2025-06-22 20:05:41.272166 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-22 20:05:41.272183 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:02.987) 0:00:35.191 *********** 2025-06-22 20:05:41.272199 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:05:41.272217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:05:41.272241 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:05:41.272258 | orchestrator | 2025-06-22 20:05:41.272275 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-22 20:05:41.272292 | orchestrator | Sunday 22 June 2025 20:05:10 +0000 (0:00:01.407) 0:00:36.598 *********** 2025-06-22 20:05:41.272308 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:41.272324 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:41.272340 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:41.272356 | orchestrator | 2025-06-22 20:05:41.272372 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-22 20:05:41.272388 | orchestrator | Sunday 22 June 2025 20:05:11 +0000 (0:00:01.310) 0:00:37.909 *********** 
2025-06-22 20:05:41.272405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.272422 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:41.272439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.272457 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:41.272495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:05:41.272516 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:41.272533 | orchestrator | 2025-06-22 20:05:41.272550 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-22 20:05:41.272568 | orchestrator | Sunday 22 June 2025 20:05:12 +0000 (0:00:00.430) 0:00:38.340 *********** 2025-06-22 20:05:41.272593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 20:05:41.272612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 20:05:41.272630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 20:05:41.272663 | orchestrator |
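Each placement_api item above carries a healthcheck block (interval, retries, start_period, timeout, plus a healthcheck_curl test against the API on port 8780). In spirit that amounts to a simple HTTP liveness probe; a rough, self-contained equivalent in Python (not the kolla healthcheck_curl script itself, with the probed URL taken directly from the log) could look like:

    import urllib.error
    import urllib.request

    def probe(url, timeout=30):
        """Return True if the endpoint answers at all; HTTP error statuses still count as alive."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            return True   # the service responded, just with an error status
        except (urllib.error.URLError, OSError):
            return False  # connection refused, timeout, DNS failure, ...

    if __name__ == "__main__":
        # IP and port taken from the healthcheck test string shown above.
        print(probe("http://192.168.16.10:8780"))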
2025-06-22 20:05:41.272680 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-06-22 20:05:41.272697 | orchestrator | Sunday 22 June 2025 20:05:13 +0000 (0:00:01.323) 0:00:39.664 ***********
2025-06-22 20:05:41.272715 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:41.272731 | orchestrator |
2025-06-22 20:05:41.272747 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-06-22 20:05:41.272764 | orchestrator | Sunday 22 June 2025 20:05:15 +0000 (0:00:02.313) 0:00:41.855 ***********
2025-06-22 20:05:41.272780 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:41.272797 | orchestrator |
2025-06-22 20:05:41.272814 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-22 20:05:41.272831 | orchestrator | Sunday 22 June 2025 20:05:18 +0000 (0:00:13.983) 0:00:44.169 ***********
2025-06-22 20:05:41.272857 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:41.272875 | orchestrator |
2025-06-22 20:05:41.272893 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 20:05:41.272910 | orchestrator | Sunday 22 June 2025 20:05:32 +0000 (0:00:00.142) 0:00:58.152 ***********
2025-06-22 20:05:41.272927 | orchestrator |
2025-06-22 20:05:41.272943 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 20:05:41.272959 | orchestrator | Sunday 22 June 2025 20:05:32 +0000 (0:00:00.149) 0:00:58.298 ***********
2025-06-22 20:05:41.272977 | orchestrator |
2025-06-22 20:05:41.272993 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 20:05:41.273010 | orchestrator | Sunday 22 June 2025 20:05:32 +0000 (0:00:00.151) 0:00:58.447 ***********
2025-06-22 20:05:41.273026 | orchestrator |
2025-06-22 20:05:41.273067 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-22 20:05:41.273085 | orchestrator | Sunday 22 June 2025 20:05:32 +0000 (0:00:00.151) 0:00:58.598 ***********
2025-06-22 20:05:41.273103 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:05:41.273121 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:05:41.273140 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:05:41.273157 | orchestrator |
2025-06-22 20:05:41.273175 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:05:41.273193 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 20:05:41.273211 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 20:05:41.273237 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 20:05:41.273256 | orchestrator |
2025-06-22 20:05:41.273274 | orchestrator |
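The PLAY RECAP above is the per-host summary that decides whether the run counts as clean: only the failed and unreachable counters matter for gating. A small parsing sketch for such recap lines (the regular expression is inferred from the format shown here, not taken from any osism or kolla tooling):

    import re

    # Format inferred from the recap lines above, e.g.
    # "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
    RECAP_RE = re.compile(
        r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
        r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
    )

    def recap_is_clean(lines):
        """Return True when no host reports failed or unreachable tasks."""
        for line in lines:
            match = RECAP_RE.match(line.strip())
            if match and (int(match["failed"]) or int(match["unreachable"])):
                return False
        return True

    assert recap_is_clean(
        ["testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0"]
    )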
2025-06-22 20:05:41.273292 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:05:41.273309 | orchestrator | Sunday 22 June 2025 20:05:38 +0000 (0:00:06.023) 0:01:04.622 ***********
2025-06-22 20:05:41.273325 | orchestrator | ===============================================================================
2025-06-22 20:05:41.273343 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.98s
2025-06-22 20:05:41.273360 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.64s
2025-06-22 20:05:41.273377 | orchestrator | placement : Restart placement-api container ----------------------------- 6.02s
2025-06-22 20:05:41.273395 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.04s
2025-06-22 20:05:41.273412 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.77s
2025-06-22 20:05:41.273428 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.39s
2025-06-22 20:05:41.273445 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.21s
2025-06-22 20:05:41.273462 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.99s
2025-06-22 20:05:41.273495 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.82s
2025-06-22 20:05:41.273512 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.31s
2025-06-22 20:05:41.273529 | orchestrator | placement : Creating placement databases -------------------------------- 2.19s
2025-06-22 20:05:41.273546 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.46s
2025-06-22 20:05:41.273563 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.41s
2025-06-22 20:05:41.273580 | orchestrator | placement : Check placement containers ---------------------------------- 1.32s
2025-06-22 20:05:41.273598 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s
2025-06-22 20:05:41.273615 | orchestrator | placement : Copying over config.json files for services ----------------- 1.30s
2025-06-22 20:05:41.273632 | orchestrator | placement : include_tasks ----------------------------------------------- 0.94s
2025-06-22 20:05:41.273649 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.84s
2025-06-22 20:05:41.273665 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.72s
2025-06-22 20:05:41.273682 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.69s
2025-06-22 20:05:41.273698 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task cf094771-2a80-4622-9a94-8293032d225a is in state STARTED
2025-06-22 20:05:41.273716 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:41.273732 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:41.273748 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:41.273765 | orchestrator | 2025-06-22 20:05:41 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:44.320595 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state STARTED
2025-06-22 20:05:44.322714 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task cf094771-2a80-4622-9a94-8293032d225a is in state SUCCESS
2025-06-22 20:05:44.324215 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:44.325982 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:44.328184 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED
2025-06-22 20:05:44.328419 | orchestrator | 2025-06-22 20:05:44 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:47.382967 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task f938d457-0451-4756-9e45-c87970ec33b8 is in state SUCCESS
2025-06-22 20:05:47.384824 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED
2025-06-22 20:05:47.386506 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:05:47.388308 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:05:47.389273 | orchestrator | 2025-06-22 20:05:47 | 
INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:47.389318 | orchestrator | 2025-06-22 20:05:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:50.447216 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:05:50.448173 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:50.449262 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:05:50.450452 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:50.450491 | orchestrator | 2025-06-22 20:05:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:53.497186 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:05:53.498862 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:53.500594 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:05:53.502980 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:53.503304 | orchestrator | 2025-06-22 20:05:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:56.551410 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:05:56.554244 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:56.558180 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:05:56.559494 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:56.559525 | orchestrator | 2025-06-22 20:05:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:59.597101 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:05:59.597414 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:05:59.598495 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:05:59.599767 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:05:59.599838 | orchestrator | 2025-06-22 20:05:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:02.639295 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:02.639383 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:02.639513 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:02.641073 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:02.641503 | orchestrator | 2025-06-22 20:06:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:05.668458 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 
dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:05.668840 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:05.669888 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:05.670567 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:05.670960 | orchestrator | 2025-06-22 20:06:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:08.715560 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:08.716436 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:08.718964 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:08.722829 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:08.722894 | orchestrator | 2025-06-22 20:06:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:11.766573 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:11.770417 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:11.773493 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:11.776248 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:11.776273 | orchestrator | 2025-06-22 20:06:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:14.822327 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:14.822447 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:14.822592 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:14.823437 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:14.823467 | orchestrator | 2025-06-22 20:06:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:17.865303 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:17.865383 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:17.865520 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:17.866202 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:17.866401 | orchestrator | 2025-06-22 20:06:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:20.894515 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:20.895919 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:20.897266 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 
63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:20.898506 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:20.899012 | orchestrator | 2025-06-22 20:06:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:23.937944 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:23.938870 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:23.944506 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:23.946305 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:23.946363 | orchestrator | 2025-06-22 20:06:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:26.974319 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:26.974403 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:26.974557 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:26.975540 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:26.975567 | orchestrator | 2025-06-22 20:06:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:30.003961 | orchestrator | 2025-06-22 20:06:30 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:30.004719 | orchestrator | 2025-06-22 20:06:30 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:30.005345 | orchestrator | 2025-06-22 20:06:30 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:30.007867 | orchestrator | 2025-06-22 20:06:30 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:30.007921 | orchestrator | 2025-06-22 20:06:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:33.045339 | orchestrator | 2025-06-22 20:06:33 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:33.047531 | orchestrator | 2025-06-22 20:06:33 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:33.049932 | orchestrator | 2025-06-22 20:06:33 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:33.051592 | orchestrator | 2025-06-22 20:06:33 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:33.051630 | orchestrator | 2025-06-22 20:06:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:36.098849 | orchestrator | 2025-06-22 20:06:36 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:36.100224 | orchestrator | 2025-06-22 20:06:36 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:36.101431 | orchestrator | 2025-06-22 20:06:36 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:36.102500 | orchestrator | 2025-06-22 20:06:36 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:36.102526 | orchestrator | 2025-06-22 20:06:36 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 20:06:39.145350 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:39.147230 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:39.149272 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:39.151094 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:39.151125 | orchestrator | 2025-06-22 20:06:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:42.189188 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:42.189266 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:42.189542 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:42.191819 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:42.191858 | orchestrator | 2025-06-22 20:06:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:45.236394 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:45.239419 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:45.241368 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:45.245504 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:45.245556 | orchestrator | 2025-06-22 20:06:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:48.296190 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:48.296369 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:48.297468 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:48.298842 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:48.298885 | orchestrator | 2025-06-22 20:06:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:51.328102 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:51.328491 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:51.329618 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:51.330160 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:51.330179 | orchestrator | 2025-06-22 20:06:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:54.382163 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:54.382314 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 
b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:54.383172 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:54.383868 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:54.383898 | orchestrator | 2025-06-22 20:06:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:57.424714 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:06:57.424913 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:06:57.428571 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:06:57.430217 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:06:57.430776 | orchestrator | 2025-06-22 20:06:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:00.471288 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:00.471519 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:00.472187 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:07:00.472744 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state STARTED 2025-06-22 20:07:00.472768 | orchestrator | 2025-06-22 20:07:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:03.510202 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:03.512878 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:03.513069 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:03.513556 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:07:03.520684 | orchestrator | 2025-06-22 20:07:03.520727 | orchestrator | 2025-06-22 20:07:03.520737 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:07:03.520747 | orchestrator | 2025-06-22 20:07:03.520756 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:07:03.520765 | orchestrator | Sunday 22 June 2025 20:05:42 +0000 (0:00:00.158) 0:00:00.158 *********** 2025-06-22 20:07:03.520795 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:03.520806 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:03.520814 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:03.520823 | orchestrator | 2025-06-22 20:07:03.520832 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:07:03.520841 | orchestrator | Sunday 22 June 2025 20:05:42 +0000 (0:00:00.279) 0:00:00.438 *********** 2025-06-22 20:07:03.520850 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-22 20:07:03.520859 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-22 20:07:03.520868 | orchestrator | ok: [testbed-node-2] => 
(item=enable_keystone_True)
2025-06-22 20:07:03.520877 | orchestrator |
2025-06-22 20:07:03.520886 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-22 20:07:03.520912 | orchestrator |
2025-06-22 20:07:03.520922 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-22 20:07:03.520931 | orchestrator | Sunday 22 June 2025 20:05:43 +0000 (0:00:00.619) 0:00:01.058 ***********
2025-06-22 20:07:03.520939 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:07:03.520948 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:07:03.520957 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:07:03.520966 | orchestrator |
2025-06-22 20:07:03.520974 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:07:03.520984 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:07:03.520993 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:07:03.521002 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:07:03.521011 | orchestrator |
2025-06-22 20:07:03.521020 | orchestrator |
2025-06-22 20:07:03.521029 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:07:03.521066 | orchestrator | Sunday 22 June 2025 20:05:43 +0000 (0:00:00.609) 0:00:01.667 ***********
2025-06-22 20:07:03.521075 | orchestrator | ===============================================================================
2025-06-22 20:07:03.521084 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-06-22 20:07:03.521182 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.61s
2025-06-22 20:07:03.521195 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-06-22 20:07:03.521203 | orchestrator |
2025-06-22 20:07:03.521212 | orchestrator | None
2025-06-22 20:07:03.521221 | orchestrator |
2025-06-22 20:07:03.521230 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:07:03.521238 | orchestrator |
2025-06-22 20:07:03.521247 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:07:03.521256 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.281) 0:00:00.281 ***********
2025-06-22 20:07:03.521264 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:07:03.521274 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:07:03.521294 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:07:03.521305 | orchestrator | ok: [testbed-node-3]
2025-06-22 20:07:03.521316 | orchestrator | ok: [testbed-node-4]
2025-06-22 20:07:03.521326 | orchestrator | ok: [testbed-node-5]
2025-06-22 20:07:03.521336 | orchestrator |
2025-06-22 20:07:03.521346 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:07:03.521356 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.618) 0:00:00.900 ***********
2025-06-22 20:07:03.521367 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-06-22 20:07:03.521377 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-06-22 20:07:03.521387 | orchestrator | ok: [testbed-node-2] =>
(item=enable_neutron_True) 2025-06-22 20:07:03.521397 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-22 20:07:03.521408 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-22 20:07:03.521418 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-22 20:07:03.521427 | orchestrator | 2025-06-22 20:07:03.521437 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-22 20:07:03.521447 | orchestrator | 2025-06-22 20:07:03.521458 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:07:03.521468 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.585) 0:00:01.485 *********** 2025-06-22 20:07:03.521478 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:07:03.521490 | orchestrator | 2025-06-22 20:07:03.521500 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-22 20:07:03.521510 | orchestrator | Sunday 22 June 2025 20:02:33 +0000 (0:00:00.877) 0:00:02.363 *********** 2025-06-22 20:07:03.521520 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:03.521529 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:03.521538 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:03.521547 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:07:03.521555 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:07:03.521575 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:07:03.521585 | orchestrator | 2025-06-22 20:07:03.521594 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-22 20:07:03.521602 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:00.980) 0:00:03.343 *********** 2025-06-22 20:07:03.521611 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:03.521620 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:03.521628 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:03.521637 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:07:03.521646 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:07:03.521667 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:07:03.521716 | orchestrator | 2025-06-22 20:07:03.521727 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-22 20:07:03.521736 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.906) 0:00:04.250 *********** 2025-06-22 20:07:03.521744 | orchestrator | ok: [testbed-node-0] => { 2025-06-22 20:07:03.521754 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521762 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:07:03.521778 | orchestrator | } 2025-06-22 20:07:03.521787 | orchestrator | ok: [testbed-node-1] => { 2025-06-22 20:07:03.521796 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521805 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:07:03.521814 | orchestrator | } 2025-06-22 20:07:03.521822 | orchestrator | ok: [testbed-node-2] => { 2025-06-22 20:07:03.521831 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521840 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:07:03.521849 | orchestrator | } 2025-06-22 20:07:03.521857 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 20:07:03.521866 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521875 | orchestrator |  "msg": "All 
assertions passed" 2025-06-22 20:07:03.521883 | orchestrator | } 2025-06-22 20:07:03.521892 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 20:07:03.521907 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521922 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:07:03.521939 | orchestrator | } 2025-06-22 20:07:03.521952 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 20:07:03.521966 | orchestrator |  "changed": false, 2025-06-22 20:07:03.521981 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:07:03.521994 | orchestrator | } 2025-06-22 20:07:03.522009 | orchestrator | 2025-06-22 20:07:03.522097 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-22 20:07:03.522114 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.635) 0:00:04.885 *********** 2025-06-22 20:07:03.522128 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.522137 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.522145 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.522154 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.522162 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.522171 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.522179 | orchestrator | 2025-06-22 20:07:03.522188 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-22 20:07:03.522197 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.530) 0:00:05.416 *********** 2025-06-22 20:07:03.522206 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-22 20:07:03.522215 | orchestrator | 2025-06-22 20:07:03.522223 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-22 20:07:03.522232 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:03.078) 0:00:08.495 *********** 2025-06-22 20:07:03.522241 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-22 20:07:03.522250 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-22 20:07:03.522259 | orchestrator | 2025-06-22 20:07:03.522268 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-22 20:07:03.522276 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:05.454) 0:00:13.949 *********** 2025-06-22 20:07:03.522285 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:07:03.522294 | orchestrator | 2025-06-22 20:07:03.522303 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-22 20:07:03.522311 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:02.771) 0:00:16.720 *********** 2025-06-22 20:07:03.522326 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:07:03.522335 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-22 20:07:03.522343 | orchestrator | 2025-06-22 20:07:03.522352 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-22 20:07:03.522360 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:03.269) 0:00:19.990 *********** 2025-06-22 20:07:03.522370 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:07:03.522385 | orchestrator | 2025-06-22 20:07:03.522400 | orchestrator | TASK 
[service-ks-register : neutron | Granting user roles] ********************* 2025-06-22 20:07:03.522414 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:03.066) 0:00:23.057 *********** 2025-06-22 20:07:03.522440 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-22 20:07:03.522455 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-22 20:07:03.522470 | orchestrator | 2025-06-22 20:07:03.522479 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:07:03.522488 | orchestrator | Sunday 22 June 2025 20:03:01 +0000 (0:00:07.593) 0:00:30.650 *********** 2025-06-22 20:07:03.522497 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.522505 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.522514 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.522522 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.522531 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.522539 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.522548 | orchestrator | 2025-06-22 20:07:03.522556 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-22 20:07:03.522565 | orchestrator | Sunday 22 June 2025 20:03:02 +0000 (0:00:00.748) 0:00:31.399 *********** 2025-06-22 20:07:03.522573 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.522582 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.522591 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.522599 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.522607 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.522616 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.522625 | orchestrator | 2025-06-22 20:07:03.522633 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-22 20:07:03.522642 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:02.018) 0:00:33.417 *********** 2025-06-22 20:07:03.522651 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:03.522659 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:03.522668 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:03.522677 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:07:03.522685 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:07:03.522711 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:07:03.522720 | orchestrator | 2025-06-22 20:07:03.522729 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 20:07:03.522738 | orchestrator | Sunday 22 June 2025 20:03:05 +0000 (0:00:01.287) 0:00:34.705 *********** 2025-06-22 20:07:03.522746 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.522755 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.522763 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.522772 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.522781 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.522789 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.522798 | orchestrator | 2025-06-22 20:07:03.522807 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-22 20:07:03.522815 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:03.333) 0:00:38.038 *********** 2025-06-22 20:07:03.522828 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.522845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.522861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.522871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2025-06-22 20:07:03.522888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.522898 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.522912 | orchestrator | 2025-06-22 20:07:03.522922 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-22 20:07:03.522931 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:02.519) 0:00:40.557 *********** 2025-06-22 20:07:03.522939 | orchestrator | [WARNING]: Skipped 2025-06-22 20:07:03.522948 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-22 20:07:03.522957 | orchestrator | due to this access issue: 2025-06-22 20:07:03.522966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-22 20:07:03.522974 | orchestrator | a directory 2025-06-22 20:07:03.522983 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:07:03.522992 | orchestrator | 2025-06-22 20:07:03.523001 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:07:03.523009 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.904) 0:00:41.462 *********** 2025-06-22 20:07:03.523018 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:07:03.523028 | orchestrator | 2025-06-22 20:07:03.523076 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-22 20:07:03.523092 | orchestrator | Sunday 22 June 2025 20:03:13 +0000 (0:00:01.013) 0:00:42.476 *********** 2025-06-22 20:07:03.523108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.523133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.523149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.523175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.523202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.523221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.523237 | orchestrator | 2025-06-22 20:07:03.523252 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-22 20:07:03.523269 | orchestrator | Sunday 22 June 2025 20:03:17 +0000 (0:00:03.959) 0:00:46.435 *********** 2025-06-22 20:07:03.523294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523305 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.523314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 
20:07:03.523332 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.523341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523350 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.523363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523372 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.523381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523390 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.523405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523425 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.523434 | orchestrator | 2025-06-22 20:07:03.523443 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-22 20:07:03.523452 | orchestrator | Sunday 22 June 2025 20:03:20 +0000 (0:00:02.517) 0:00:48.953 *********** 2025-06-22 20:07:03.523461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523471 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.523483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523492 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.523501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523511 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.523525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523540 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.523549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523558 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.523567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523576 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.523585 | orchestrator | 2025-06-22 20:07:03.523594 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-22 20:07:03.523603 | orchestrator | Sunday 22 June 2025 20:03:22 +0000 (0:00:02.940) 0:00:51.893 *********** 2025-06-22 20:07:03.523611 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.523620 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.523629 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.523638 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.523647 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.523655 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.523664 | orchestrator | 2025-06-22 20:07:03.523676 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-22 20:07:03.523685 | orchestrator | Sunday 22 June 2025 20:03:25 +0000 (0:00:02.447) 0:00:54.341 *********** 2025-06-22 20:07:03.523694 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 20:07:03.523702 | orchestrator | 2025-06-22 20:07:03.523711 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-22 20:07:03.523720 | orchestrator | Sunday 22 June 2025 20:03:25 +0000 (0:00:00.169) 0:00:54.510 *********** 2025-06-22 20:07:03.523728 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.523737 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.523745 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.523755 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.523763 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.523772 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.523781 | orchestrator | 2025-06-22 20:07:03.523789 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-22 20:07:03.523798 | orchestrator | Sunday 22 June 2025 20:03:26 +0000 (0:00:00.610) 0:00:55.121 *********** 2025-06-22 20:07:03.523807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523822 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.523836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.523855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.523864 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.523877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523886 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.523895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523909 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.523922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.523932 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.523941 | orchestrator | 2025-06-22 20:07:03.523950 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-22 20:07:03.523959 | orchestrator | Sunday 22 June 2025 20:03:29 +0000 (0:00:02.906) 0:00:58.027 *********** 2025-06-22 20:07:03.523968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.523977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.523990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524021 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524090 | orchestrator | 2025-06-22 20:07:03.524100 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-22 20:07:03.524109 | orchestrator | Sunday 22 June 2025 20:03:33 +0000 (0:00:04.048) 0:01:02.076 *********** 2025-06-22 20:07:03.524118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.524181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524189 | orchestrator | 2025-06-22 20:07:03.524198 | orchestrator | TASK [neutron : Copying 
over neutron_vpnaas.conf] ****************************** 2025-06-22 20:07:03.524211 | orchestrator | Sunday 22 June 2025 20:03:40 +0000 (0:00:06.965) 0:01:09.042 *********** 2025-06-22 20:07:03.524225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524234 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524253 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524304 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524365 | orchestrator | 2025-06-22 20:07:03.524379 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-22 20:07:03.524394 | orchestrator | Sunday 22 June 2025 20:03:43 +0000 (0:00:03.688) 0:01:12.730 *********** 2025-06-22 20:07:03.524409 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524424 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524439 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:03.524454 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:03.524468 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:03.524483 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524497 | orchestrator | 2025-06-22 20:07:03.524512 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-22 20:07:03.524526 | orchestrator | Sunday 22 June 2025 20:03:46 +0000 (0:00:03.132) 0:01:15.863 *********** 2025-06-22 20:07:03.524548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524557 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524575 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.524601 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.524645 | orchestrator | 2025-06-22 20:07:03.524653 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-22 20:07:03.524661 | orchestrator | Sunday 22 June 2025 20:03:50 +0000 (0:00:03.541) 0:01:19.404 *********** 2025-06-22 20:07:03.524669 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.524677 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524685 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.524693 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.524705 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524714 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524721 | orchestrator | 2025-06-22 20:07:03.524730 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-22 20:07:03.524737 | orchestrator | Sunday 22 June 2025 20:03:52 +0000 (0:00:02.158) 0:01:21.563 *********** 2025-06-22 20:07:03.524745 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.524753 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.524761 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.524769 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524777 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524784 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524792 | orchestrator | 2025-06-22 20:07:03.524800 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-22 20:07:03.524808 | orchestrator | Sunday 22 June 2025 20:03:55 +0000 (0:00:02.481) 0:01:24.044 *********** 2025-06-22 20:07:03.524816 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.524824 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.524832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.524840 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524848 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524856 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524864 | 
orchestrator | 2025-06-22 20:07:03.524871 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-22 20:07:03.524879 | orchestrator | Sunday 22 June 2025 20:03:57 +0000 (0:00:02.406) 0:01:26.451 *********** 2025-06-22 20:07:03.524887 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.524899 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.524907 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.524915 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.524923 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524930 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.524938 | orchestrator | 2025-06-22 20:07:03.524946 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-22 20:07:03.524954 | orchestrator | Sunday 22 June 2025 20:03:59 +0000 (0:00:01.939) 0:01:28.390 *********** 2025-06-22 20:07:03.524962 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.524970 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.524978 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.524986 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.524994 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525001 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525009 | orchestrator | 2025-06-22 20:07:03.525017 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-22 20:07:03.525025 | orchestrator | Sunday 22 June 2025 20:04:01 +0000 (0:00:01.991) 0:01:30.382 *********** 2025-06-22 20:07:03.525049 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525057 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525065 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525073 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525080 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525088 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525096 | orchestrator | 2025-06-22 20:07:03.525104 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-22 20:07:03.525112 | orchestrator | Sunday 22 June 2025 20:04:03 +0000 (0:00:01.883) 0:01:32.265 *********** 2025-06-22 20:07:03.525120 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:07:03.525128 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525136 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:07:03.525143 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525151 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:07:03.525164 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525172 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:07:03.525180 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525188 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:07:03.525201 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525210 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 
20:07:03.525217 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525225 | orchestrator | 2025-06-22 20:07:03.525233 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-22 20:07:03.525241 | orchestrator | Sunday 22 June 2025 20:04:05 +0000 (0:00:01.905) 0:01:34.171 *********** 2025-06-22 20:07:03.525249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525258 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525274 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525295 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525316 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525337 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525354 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525362 | orchestrator | 2025-06-22 20:07:03.525370 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-22 20:07:03.525377 | orchestrator | Sunday 22 June 2025 20:04:07 +0000 (0:00:01.952) 0:01:36.124 *********** 2025-06-22 20:07:03.525389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525398 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525419 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.525440 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525457 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525473 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.525500 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525509 | orchestrator | 2025-06-22 20:07:03.525517 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-22 20:07:03.525525 | orchestrator | Sunday 22 June 2025 20:04:09 +0000 (0:00:02.617) 0:01:38.741 *********** 2025-06-22 20:07:03.525533 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525549 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525556 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525564 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525572 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525580 | orchestrator | 2025-06-22 20:07:03.525588 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-22 20:07:03.525596 | orchestrator | Sunday 22 June 2025 20:04:11 +0000 (0:00:02.156) 0:01:40.897 *********** 2025-06-22 20:07:03.525604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525612 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525620 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525628 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:07:03.525636 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:07:03.525643 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:07:03.525651 | orchestrator | 2025-06-22 20:07:03.525659 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-22 20:07:03.525667 | orchestrator | Sunday 22 June 2025 20:04:15 +0000 (0:00:03.487) 0:01:44.385 *********** 2025-06-22 20:07:03.525675 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525683 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525690 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525698 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525706 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525714 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525722 | orchestrator | 2025-06-22 20:07:03.525730 | orchestrator | TASK [neutron : Copying over 
metering_agent.ini] ******************************* 2025-06-22 20:07:03.525799 | orchestrator | Sunday 22 June 2025 20:04:18 +0000 (0:00:03.421) 0:01:47.806 *********** 2025-06-22 20:07:03.525809 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525817 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525825 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525833 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525841 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525849 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525857 | orchestrator | 2025-06-22 20:07:03.525865 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-22 20:07:03.525873 | orchestrator | Sunday 22 June 2025 20:04:21 +0000 (0:00:02.178) 0:01:49.985 *********** 2025-06-22 20:07:03.525881 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525888 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525904 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525912 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525919 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525927 | orchestrator | 2025-06-22 20:07:03.525935 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-22 20:07:03.525943 | orchestrator | Sunday 22 June 2025 20:04:23 +0000 (0:00:02.275) 0:01:52.260 *********** 2025-06-22 20:07:03.525951 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.525959 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.525966 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.525974 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.525982 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.525990 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.525998 | orchestrator | 2025-06-22 20:07:03.526005 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-22 20:07:03.526096 | orchestrator | Sunday 22 June 2025 20:04:25 +0000 (0:00:02.260) 0:01:54.520 *********** 2025-06-22 20:07:03.526109 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526118 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526125 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526133 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526141 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526149 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526157 | orchestrator | 2025-06-22 20:07:03.526165 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-22 20:07:03.526173 | orchestrator | Sunday 22 June 2025 20:04:28 +0000 (0:00:02.820) 0:01:57.341 *********** 2025-06-22 20:07:03.526181 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526188 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526196 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526204 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526212 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526220 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526227 | orchestrator | 2025-06-22 20:07:03.526235 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2025-06-22 20:07:03.526243 | orchestrator | Sunday 22 June 2025 20:04:31 +0000 (0:00:03.365) 0:02:00.706 *********** 2025-06-22 20:07:03.526251 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526259 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526267 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526282 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526290 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526298 | orchestrator | 2025-06-22 20:07:03.526305 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-22 20:07:03.526318 | orchestrator | Sunday 22 June 2025 20:04:34 +0000 (0:00:02.398) 0:02:03.105 *********** 2025-06-22 20:07:03.526326 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526334 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526342 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526350 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526358 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526365 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526373 | orchestrator | 2025-06-22 20:07:03.526381 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-22 20:07:03.526389 | orchestrator | Sunday 22 June 2025 20:04:36 +0000 (0:00:02.137) 0:02:05.242 *********** 2025-06-22 20:07:03.526397 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526405 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526413 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526420 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526427 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526433 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526440 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526447 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526453 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526460 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526466 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:07:03.526473 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526480 | orchestrator | 2025-06-22 20:07:03.526486 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-22 20:07:03.526497 | orchestrator | Sunday 22 June 2025 20:04:39 +0000 (0:00:02.905) 0:02:08.148 *********** 2025-06-22 20:07:03.526511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.526519 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.526533 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:07:03.526550 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.526563 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
20:07:03.526573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.526588 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:07:03.526602 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526608 | orchestrator | 2025-06-22 20:07:03.526615 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-22 20:07:03.526622 | orchestrator | Sunday 22 June 2025 20:04:41 +0000 (0:00:01.907) 0:02:10.055 *********** 2025-06-22 20:07:03.526629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.526639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.526647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:07:03.526663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.526670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.526678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:07:03.526685 | orchestrator | 2025-06-22 20:07:03.526803 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:07:03.526811 | orchestrator | Sunday 22 June 2025 20:04:44 +0000 (0:00:02.851) 0:02:12.906 *********** 2025-06-22 20:07:03.526818 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:03.526825 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:03.526832 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:03.526838 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:07:03.526845 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:07:03.526852 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:07:03.526859 | orchestrator | 2025-06-22 20:07:03.526870 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-22 20:07:03.526877 | orchestrator | Sunday 22 June 2025 20:04:44 +0000 (0:00:00.441) 0:02:13.348 *********** 2025-06-22 20:07:03.526884 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:03.526893 | orchestrator | 2025-06-22 20:07:03.526900 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-22 20:07:03.526912 | orchestrator | Sunday 22 June 2025 20:04:46 +0000 (0:00:02.223) 0:02:15.571 *********** 2025-06-22 20:07:03.526919 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:03.526925 | orchestrator | 2025-06-22 20:07:03.526932 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-22 20:07:03.526939 | orchestrator | Sunday 22 June 2025 20:04:48 +0000 (0:00:02.252) 0:02:17.824 *********** 2025-06-22 20:07:03.526946 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:03.526952 | orchestrator | 2025-06-22 20:07:03.526959 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.526966 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:40.492) 0:02:58.316 *********** 2025-06-22 20:07:03.526972 | orchestrator | 2025-06-22 20:07:03.526979 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.526986 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:00.072) 0:02:58.389 *********** 2025-06-22 20:07:03.526992 | orchestrator | 2025-06-22 20:07:03.526999 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.527006 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:00.369) 0:02:58.759 *********** 2025-06-22 20:07:03.527013 | orchestrator | 2025-06-22 20:07:03.527019 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.527026 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:00.076) 0:02:58.836 *********** 2025-06-22 20:07:03.527046 | orchestrator | 2025-06-22 20:07:03.527054 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.527060 | orchestrator | Sunday 22 June 2025 20:05:30 +0000 (0:00:00.083) 0:02:58.920 *********** 2025-06-22 20:07:03.527067 | orchestrator | 2025-06-22 20:07:03.527074 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:07:03.527080 | orchestrator | Sunday 22 
June 2025 20:05:30 +0000 (0:00:00.076) 0:02:58.996 ***********
2025-06-22 20:07:03.527087 | orchestrator |
2025-06-22 20:07:03.527094 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-22 20:07:03.527100 | orchestrator | Sunday 22 June 2025 20:05:30 +0000 (0:00:00.067) 0:02:59.063 ***********
2025-06-22 20:07:03.527112 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:07:03.527119 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:07:03.527126 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:07:03.527132 | orchestrator |
2025-06-22 20:07:03.527139 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-22 20:07:03.527146 | orchestrator | Sunday 22 June 2025 20:05:57 +0000 (0:00:27.515) 0:03:26.579 ***********
2025-06-22 20:07:03.527152 | orchestrator | changed: [testbed-node-3]
2025-06-22 20:07:03.527159 | orchestrator | changed: [testbed-node-4]
2025-06-22 20:07:03.527166 | orchestrator | changed: [testbed-node-5]
2025-06-22 20:07:03.527173 | orchestrator |
2025-06-22 20:07:03.527180 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:07:03.527187 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-22 20:07:03.527194 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-22 20:07:03.527201 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-22 20:07:03.527208 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 20:07:03.527214 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 20:07:03.527221 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 20:07:03.527232 | orchestrator |
2025-06-22 20:07:03.527239 | orchestrator |
2025-06-22 20:07:03.527246 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:07:03.527252 | orchestrator | Sunday 22 June 2025 20:07:00 +0000 (0:01:02.875) 0:04:29.454 ***********
2025-06-22 20:07:03.527259 | orchestrator | ===============================================================================
2025-06-22 20:07:03.527266 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.88s
2025-06-22 20:07:03.527272 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.49s
2025-06-22 20:07:03.527279 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.52s
2025-06-22 20:07:03.527286 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.59s
2025-06-22 20:07:03.527292 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.97s
2025-06-22 20:07:03.527299 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.45s
2025-06-22 20:07:03.527306 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.05s
2025-06-22 20:07:03.527312 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.96s
2025-06-22 20:07:03.527322 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.69s
2025-06-22 20:07:03.527329 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.54s
2025-06-22 20:07:03.527336 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.49s
2025-06-22 20:07:03.527343 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 3.42s
2025-06-22 20:07:03.527349 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.37s
2025-06-22 20:07:03.527356 | orchestrator | Setting sysctl values --------------------------------------------------- 3.33s
2025-06-22 20:07:03.527363 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.27s
2025-06-22 20:07:03.527369 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.13s
2025-06-22 20:07:03.527376 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.08s
2025-06-22 20:07:03.527383 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.07s
2025-06-22 20:07:03.527389 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.94s
2025-06-22 20:07:03.527396 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.91s
2025-06-22 20:07:03.527403 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 248e686d-8e16-4e92-9828-bb75489b2976 is in state SUCCESS
2025-06-22 20:07:03.527410 | orchestrator | 2025-06-22 20:07:03 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:07:06.560532 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED
2025-06-22 20:07:06.561518 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:07:06.561912 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED
2025-06-22 20:07:06.562922 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:07:06.562950 | orchestrator | 2025-06-22 20:07:06 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:07:09.602367 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED
2025-06-22 20:07:09.602456 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:07:09.604294 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED
2025-06-22 20:07:09.605994 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:07:09.606152 | orchestrator | 2025-06-22 20:07:09 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:07:12.648154 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED
2025-06-22 20:07:12.649439 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED
2025-06-22 20:07:12.649738 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED
2025-06-22 20:07:12.651133 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED
2025-06-22 20:07:12.651157 |
orchestrator | 2025-06-22 20:07:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:15.706728 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:15.709316 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:15.712695 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:15.715074 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:07:15.715113 | orchestrator | 2025-06-22 20:07:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:18.772842 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:18.775573 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:18.779359 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:18.784151 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state STARTED 2025-06-22 20:07:18.784186 | orchestrator | 2025-06-22 20:07:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:21.835576 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:21.835694 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:21.836530 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:21.838732 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 63a149fc-dabe-444f-a8b7-edacda37694f is in state SUCCESS 2025-06-22 20:07:21.840226 | orchestrator | 2025-06-22 20:07:21.840271 | orchestrator | 2025-06-22 20:07:21.840291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:07:21.840313 | orchestrator | 2025-06-22 20:07:21.840738 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:07:21.840773 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:00.263) 0:00:00.263 *********** 2025-06-22 20:07:21.840795 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:21.840817 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:21.840838 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:21.840880 | orchestrator | 2025-06-22 20:07:21.840903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:07:21.840924 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:00.311) 0:00:00.575 *********** 2025-06-22 20:07:21.840945 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-22 20:07:21.840966 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-22 20:07:21.840984 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-22 20:07:21.841017 | orchestrator | 2025-06-22 20:07:21.841028 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-22 20:07:21.841069 | orchestrator | 2025-06-22 20:07:21.841081 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 
20:07:21.841092 | orchestrator | Sunday 22 June 2025 20:05:30 +0000 (0:00:00.430) 0:00:01.006 *********** 2025-06-22 20:07:21.841103 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:21.841114 | orchestrator | 2025-06-22 20:07:21.841125 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-22 20:07:21.841136 | orchestrator | Sunday 22 June 2025 20:05:31 +0000 (0:00:01.007) 0:00:02.014 *********** 2025-06-22 20:07:21.841148 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-22 20:07:21.841171 | orchestrator | 2025-06-22 20:07:21.841183 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-22 20:07:21.841217 | orchestrator | Sunday 22 June 2025 20:05:35 +0000 (0:00:03.701) 0:00:05.716 *********** 2025-06-22 20:07:21.841238 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-22 20:07:21.841250 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-22 20:07:21.841260 | orchestrator | 2025-06-22 20:07:21.841271 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-22 20:07:21.841281 | orchestrator | Sunday 22 June 2025 20:05:41 +0000 (0:00:06.421) 0:00:12.138 *********** 2025-06-22 20:07:21.841292 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:07:21.841303 | orchestrator | 2025-06-22 20:07:21.841314 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-22 20:07:21.841325 | orchestrator | Sunday 22 June 2025 20:05:44 +0000 (0:00:02.809) 0:00:14.947 *********** 2025-06-22 20:07:21.841338 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:07:21.841350 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-22 20:07:21.841362 | orchestrator | 2025-06-22 20:07:21.841374 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-22 20:07:21.841387 | orchestrator | Sunday 22 June 2025 20:05:47 +0000 (0:00:03.376) 0:00:18.324 *********** 2025-06-22 20:07:21.841399 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:07:21.841411 | orchestrator | 2025-06-22 20:07:21.841423 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-22 20:07:21.841435 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:03.177) 0:00:21.501 *********** 2025-06-22 20:07:21.841448 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-22 20:07:21.841460 | orchestrator | 2025-06-22 20:07:21.841473 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-22 20:07:21.841485 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:03.442) 0:00:24.944 *********** 2025-06-22 20:07:21.841497 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.841510 | orchestrator | 2025-06-22 20:07:21.841522 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-22 20:07:21.841534 | orchestrator | Sunday 22 June 2025 20:05:57 +0000 (0:00:02.855) 0:00:27.799 *********** 2025-06-22 20:07:21.841547 | orchestrator | changed: [testbed-node-0] 2025-06-22 
20:07:21.841559 | orchestrator | 2025-06-22 20:07:21.841571 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-22 20:07:21.841583 | orchestrator | Sunday 22 June 2025 20:06:00 +0000 (0:00:03.350) 0:00:31.150 *********** 2025-06-22 20:07:21.841595 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.841607 | orchestrator | 2025-06-22 20:07:21.841619 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-22 20:07:21.841631 | orchestrator | Sunday 22 June 2025 20:06:03 +0000 (0:00:03.149) 0:00:34.299 *********** 2025-06-22 20:07:21.841686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.841708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.841728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.841750 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.841770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.841811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.841825 | orchestrator | 2025-06-22 20:07:21.841836 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-22 20:07:21.841847 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:01.759) 0:00:36.058 *********** 2025-06-22 20:07:21.841858 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.841868 | orchestrator | 2025-06-22 20:07:21.841879 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-22 20:07:21.841894 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:00.180) 0:00:36.239 *********** 2025-06-22 20:07:21.841914 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.841933 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:21.841953 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:21.841994 | orchestrator | 2025-06-22 20:07:21.842015 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-22 20:07:21.842198 | orchestrator | Sunday 22 June 2025 20:06:06 +0000 (0:00:00.667) 0:00:36.907 *********** 2025-06-22 20:07:21.842213 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:07:21.842224 | orchestrator | 2025-06-22 
20:07:21.842235 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-22 20:07:21.842246 | orchestrator | Sunday 22 June 2025 20:06:06 +0000 (0:00:00.753) 0:00:37.661 *********** 2025-06-22 20:07:21.842258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842370 | orchestrator | 2025-06-22 20:07:21.842385 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-22 20:07:21.842399 | orchestrator | Sunday 22 June 2025 20:06:09 +0000 (0:00:02.818) 0:00:40.479 *********** 2025-06-22 20:07:21.842410 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:21.842421 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:21.842432 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:21.842447 | orchestrator | 2025-06-22 20:07:21.842467 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:07:21.842486 | orchestrator | Sunday 22 June 2025 20:06:10 +0000 (0:00:00.375) 0:00:40.855 *********** 2025-06-22 20:07:21.842508 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:21.842523 | orchestrator | 2025-06-22 20:07:21.842534 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-22 20:07:21.842545 | orchestrator | Sunday 22 June 2025 20:06:11 +0000 (0:00:01.181) 0:00:42.037 *********** 2025-06-22 20:07:21.842556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.842658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.842730 | orchestrator | 2025-06-22 20:07:21.842927 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-22 20:07:21.842964 | orchestrator | Sunday 22 June 2025 20:06:14 +0000 (0:00:02.963) 0:00:45.000 *********** 2025-06-22 20:07:21.843122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843173 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.843222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843276 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:21.843303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843356 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:21.843371 | orchestrator | 2025-06-22 20:07:21.843390 | orchestrator | TASK [service-cert-copy : magnum | 
Copying over backend internal TLS key] ****** 2025-06-22 20:07:21.843410 | orchestrator | Sunday 22 June 2025 20:06:14 +0000 (0:00:00.678) 0:00:45.679 *********** 2025-06-22 20:07:21.843431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843492 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.843512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843614 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:21.843652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.843672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.843700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:21.843718 | orchestrator | 2025-06-22 20:07:21.843737 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-22 20:07:21.843755 | orchestrator | Sunday 22 June 2025 20:06:16 +0000 (0:00:01.354) 0:00:47.033 *********** 2025-06-22 20:07:21.843773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.843798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.843830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.843849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.843875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.843887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.843910 | orchestrator | 2025-06-22 20:07:21.843920 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-22 20:07:21.843930 | orchestrator | Sunday 22 June 2025 20:06:18 +0000 (0:00:02.338) 0:00:49.371 *********** 2025-06-22 20:07:21.843958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.843975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.843998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-06-22 20:07:21.844015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844097 | orchestrator | 2025-06-22 20:07:21.844114 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-22 20:07:21.844131 | orchestrator | Sunday 22 June 2025 20:06:26 +0000 (0:00:08.179) 0:00:57.550 *********** 2025-06-22 20:07:21.844141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.844169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.844180 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.844201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.844212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.844221 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:21.844241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:07:21.844253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:07:21.844268 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:21.844278 | orchestrator | 2025-06-22 20:07:21.844288 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-22 20:07:21.844298 | orchestrator | Sunday 22 June 2025 20:06:28 +0000 (0:00:01.475) 0:00:59.025 *********** 2025-06-22 20:07:21.844308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.844318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.844328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:07:21.844344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:21.844380 | orchestrator | 2025-06-22 20:07:21.844390 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:07:21.844399 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:02.303) 0:01:01.328 *********** 2025-06-22 20:07:21.844409 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:21.844419 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:21.844428 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:21.844438 | orchestrator | 2025-06-22 20:07:21.844448 | 
orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-22 20:07:21.844457 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:00.278) 0:01:01.607 *********** 2025-06-22 20:07:21.844467 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.844476 | orchestrator | 2025-06-22 20:07:21.844513 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-22 20:07:21.844523 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:01.760) 0:01:03.368 *********** 2025-06-22 20:07:21.844533 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.844543 | orchestrator | 2025-06-22 20:07:21.844552 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-22 20:07:21.844562 | orchestrator | Sunday 22 June 2025 20:06:34 +0000 (0:00:01.800) 0:01:05.168 *********** 2025-06-22 20:07:21.844571 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.844581 | orchestrator | 2025-06-22 20:07:21.844590 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:07:21.844600 | orchestrator | Sunday 22 June 2025 20:06:47 +0000 (0:00:13.082) 0:01:18.250 *********** 2025-06-22 20:07:21.844609 | orchestrator | 2025-06-22 20:07:21.844619 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:07:21.844628 | orchestrator | Sunday 22 June 2025 20:06:47 +0000 (0:00:00.062) 0:01:18.313 *********** 2025-06-22 20:07:21.844638 | orchestrator | 2025-06-22 20:07:21.844647 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:07:21.844657 | orchestrator | Sunday 22 June 2025 20:06:47 +0000 (0:00:00.060) 0:01:18.373 *********** 2025-06-22 20:07:21.844672 | orchestrator | 2025-06-22 20:07:21.844682 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-22 20:07:21.844691 | orchestrator | Sunday 22 June 2025 20:06:47 +0000 (0:00:00.061) 0:01:18.434 *********** 2025-06-22 20:07:21.844705 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.844715 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:21.844724 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:21.844733 | orchestrator | 2025-06-22 20:07:21.844743 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-22 20:07:21.844753 | orchestrator | Sunday 22 June 2025 20:07:05 +0000 (0:00:17.314) 0:01:35.748 *********** 2025-06-22 20:07:21.844762 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:21.844772 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:21.844781 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:21.844791 | orchestrator | 2025-06-22 20:07:21.844805 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:07:21.844816 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:07:21.844826 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:07:21.844836 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:07:21.844852 | orchestrator | 2025-06-22 20:07:21.844862 | orchestrator | 2025-06-22 20:07:21.844872 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-06-22 20:07:21.844882 | orchestrator | Sunday 22 June 2025 20:07:19 +0000 (0:00:14.748) 0:01:50.497 *********** 2025-06-22 20:07:21.844891 | orchestrator | =============================================================================== 2025-06-22 20:07:21.844901 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.31s 2025-06-22 20:07:21.844910 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.75s 2025-06-22 20:07:21.844919 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.08s 2025-06-22 20:07:21.844933 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.18s 2025-06-22 20:07:21.844950 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.42s 2025-06-22 20:07:21.844966 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.70s 2025-06-22 20:07:21.844977 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.44s 2025-06-22 20:07:21.844986 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.38s 2025-06-22 20:07:21.844996 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.35s 2025-06-22 20:07:21.845011 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.18s 2025-06-22 20:07:21.845029 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.15s 2025-06-22 20:07:21.845067 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.96s 2025-06-22 20:07:21.845086 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.86s 2025-06-22 20:07:21.845105 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.82s 2025-06-22 20:07:21.845122 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.81s 2025-06-22 20:07:21.845139 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.34s 2025-06-22 20:07:21.845157 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.30s 2025-06-22 20:07:21.845189 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.80s 2025-06-22 20:07:21.845206 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.76s 2025-06-22 20:07:21.845248 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.76s 2025-06-22 20:07:21.845267 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:21.845285 | orchestrator | 2025-06-22 20:07:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:24.886658 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:24.887191 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:24.888664 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:24.889697 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 
4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:24.889719 | orchestrator | 2025-06-22 20:07:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:27.943999 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:27.946483 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:27.948113 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:27.949605 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:27.949844 | orchestrator | 2025-06-22 20:07:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:30.994316 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:30.996310 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:30.997794 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state STARTED 2025-06-22 20:07:30.999300 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:30.999348 | orchestrator | 2025-06-22 20:07:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:34.053775 | orchestrator | 2025-06-22 20:07:34 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:34.056284 | orchestrator | 2025-06-22 20:07:34 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:34.057484 | orchestrator | 2025-06-22 20:07:34 | INFO  | Task 680724de-1d98-4881-ad27-128eee71192d is in state SUCCESS 2025-06-22 20:07:34.059099 | orchestrator | 2025-06-22 20:07:34 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:34.059143 | orchestrator | 2025-06-22 20:07:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:37.114609 | orchestrator | 2025-06-22 20:07:37 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:37.117796 | orchestrator | 2025-06-22 20:07:37 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:37.120717 | orchestrator | 2025-06-22 20:07:37 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:37.122500 | orchestrator | 2025-06-22 20:07:37 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:37.122751 | orchestrator | 2025-06-22 20:07:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:40.163352 | orchestrator | 2025-06-22 20:07:40 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:40.163456 | orchestrator | 2025-06-22 20:07:40 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:40.163694 | orchestrator | 2025-06-22 20:07:40 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:40.164250 | orchestrator | 2025-06-22 20:07:40 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:40.165200 | orchestrator | 2025-06-22 20:07:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:43.209507 | orchestrator | 2025-06-22 20:07:43 | INFO  | Task 
dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:43.210176 | orchestrator | 2025-06-22 20:07:43 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:43.212136 | orchestrator | 2025-06-22 20:07:43 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:43.213742 | orchestrator | 2025-06-22 20:07:43 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:43.213765 | orchestrator | 2025-06-22 20:07:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:46.255606 | orchestrator | 2025-06-22 20:07:46 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:46.258219 | orchestrator | 2025-06-22 20:07:46 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:46.258243 | orchestrator | 2025-06-22 20:07:46 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:46.258827 | orchestrator | 2025-06-22 20:07:46 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:46.259231 | orchestrator | 2025-06-22 20:07:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:49.292069 | orchestrator | 2025-06-22 20:07:49 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:49.292513 | orchestrator | 2025-06-22 20:07:49 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:49.293609 | orchestrator | 2025-06-22 20:07:49 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:49.295662 | orchestrator | 2025-06-22 20:07:49 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:49.295710 | orchestrator | 2025-06-22 20:07:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:52.333975 | orchestrator | 2025-06-22 20:07:52 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:52.334125 | orchestrator | 2025-06-22 20:07:52 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:52.334675 | orchestrator | 2025-06-22 20:07:52 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:52.335378 | orchestrator | 2025-06-22 20:07:52 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:52.335399 | orchestrator | 2025-06-22 20:07:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:55.383858 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:55.385220 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:55.388240 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:55.389126 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:55.389183 | orchestrator | 2025-06-22 20:07:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:58.426741 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:07:58.431145 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:07:58.432740 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task 
a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:07:58.434823 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:07:58.434964 | orchestrator | 2025-06-22 20:07:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:01.458423 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:01.462001 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:01.464267 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:01.467230 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:01.467820 | orchestrator | 2025-06-22 20:08:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:04.505380 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:04.507991 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:04.510375 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:04.512407 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:04.512729 | orchestrator | 2025-06-22 20:08:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:07.548733 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:07.549185 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:07.549820 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:07.552409 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:07.552494 | orchestrator | 2025-06-22 20:08:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:10.577969 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:10.578870 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:10.580249 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:10.581360 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:10.581568 | orchestrator | 2025-06-22 20:08:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:13.647963 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:13.648363 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:13.649105 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:13.650882 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:13.650908 | orchestrator | 2025-06-22 20:08:13 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 20:08:16.684320 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:16.686258 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:16.687315 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:16.690247 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:16.690294 | orchestrator | 2025-06-22 20:08:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:19.726221 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:19.726670 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:19.728185 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:19.729167 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:19.729191 | orchestrator | 2025-06-22 20:08:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:22.770802 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:22.772511 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:22.773696 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:22.774449 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:22.774489 | orchestrator | 2025-06-22 20:08:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:25.815892 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:25.816818 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:25.818982 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:25.820838 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:25.820878 | orchestrator | 2025-06-22 20:08:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:28.857129 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:28.857239 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:28.857889 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:28.858644 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:28.859447 | orchestrator | 2025-06-22 20:08:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:31.902489 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:31.903250 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 
b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:31.904714 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:31.905210 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:31.905229 | orchestrator | 2025-06-22 20:08:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:34.947586 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:34.947939 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:34.949506 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:34.950657 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:34.950691 | orchestrator | 2025-06-22 20:08:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:37.985541 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:37.988380 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:37.989131 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:37.989551 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:37.989916 | orchestrator | 2025-06-22 20:08:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:41.020543 | orchestrator | 2025-06-22 20:08:41 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:41.024727 | orchestrator | 2025-06-22 20:08:41 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:41.026951 | orchestrator | 2025-06-22 20:08:41 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:41.026999 | orchestrator | 2025-06-22 20:08:41 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:41.027018 | orchestrator | 2025-06-22 20:08:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:44.062774 | orchestrator | 2025-06-22 20:08:44 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:44.063260 | orchestrator | 2025-06-22 20:08:44 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:44.064791 | orchestrator | 2025-06-22 20:08:44 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:44.066246 | orchestrator | 2025-06-22 20:08:44 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:44.066522 | orchestrator | 2025-06-22 20:08:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:47.094273 | orchestrator | 2025-06-22 20:08:47 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:47.094358 | orchestrator | 2025-06-22 20:08:47 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:47.094974 | orchestrator | 2025-06-22 20:08:47 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:47.095407 | orchestrator | 2025-06-22 20:08:47 | INFO  | Task 
4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:47.096339 | orchestrator | 2025-06-22 20:08:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:50.119651 | orchestrator | 2025-06-22 20:08:50 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:50.119900 | orchestrator | 2025-06-22 20:08:50 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state STARTED 2025-06-22 20:08:50.120920 | orchestrator | 2025-06-22 20:08:50 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:50.122538 | orchestrator | 2025-06-22 20:08:50 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:50.122609 | orchestrator | 2025-06-22 20:08:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:53.161838 | orchestrator | 2025-06-22 20:08:53 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:53.162257 | orchestrator | 2025-06-22 20:08:53 | INFO  | Task b349658c-193c-406b-a844-fa5ca63cf9d1 is in state SUCCESS 2025-06-22 20:08:53.162454 | orchestrator | 2025-06-22 20:08:53.162476 | orchestrator | 2025-06-22 20:08:53.162487 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:08:53.162499 | orchestrator | 2025-06-22 20:08:53.162510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:08:53.162521 | orchestrator | Sunday 22 June 2025 20:07:04 +0000 (0:00:00.269) 0:00:00.269 *********** 2025-06-22 20:08:53.162533 | orchestrator | ok: [testbed-manager] 2025-06-22 20:08:53.162544 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:08:53.162555 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:08:53.162565 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:08:53.162576 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:08:53.162587 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:08:53.162597 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:08:53.162608 | orchestrator | 2025-06-22 20:08:53.162619 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:08:53.162643 | orchestrator | Sunday 22 June 2025 20:07:06 +0000 (0:00:01.094) 0:00:01.364 *********** 2025-06-22 20:08:53.162655 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162666 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162677 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162687 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162698 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162709 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162719 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-22 20:08:53.162731 | orchestrator | 2025-06-22 20:08:53.162742 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 20:08:53.162789 | orchestrator | 2025-06-22 20:08:53.162800 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-22 20:08:53.162811 | orchestrator | Sunday 22 June 2025 20:07:07 +0000 (0:00:01.283) 0:00:02.647 *********** 2025-06-22 20:08:53.162823 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:53.162835 | orchestrator | 2025-06-22 20:08:53.162846 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-22 20:08:53.162856 | orchestrator | Sunday 22 June 2025 20:07:09 +0000 (0:00:01.754) 0:00:04.401 *********** 2025-06-22 20:08:53.162867 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-06-22 20:08:53.162878 | orchestrator | 2025-06-22 20:08:53.162889 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-22 20:08:53.162899 | orchestrator | Sunday 22 June 2025 20:07:12 +0000 (0:00:03.187) 0:00:07.589 *********** 2025-06-22 20:08:53.162911 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-22 20:08:53.162942 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-22 20:08:53.162954 | orchestrator | 2025-06-22 20:08:53.162965 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-22 20:08:53.162976 | orchestrator | Sunday 22 June 2025 20:07:17 +0000 (0:00:05.730) 0:00:13.320 *********** 2025-06-22 20:08:53.162987 | orchestrator | ok: [testbed-manager] => (item=service) 2025-06-22 20:08:53.162997 | orchestrator | 2025-06-22 20:08:53.163008 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-22 20:08:53.163019 | orchestrator | Sunday 22 June 2025 20:07:20 +0000 (0:00:02.839) 0:00:16.159 *********** 2025-06-22 20:08:53.163030 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:08:53.163076 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-06-22 20:08:53.163090 | orchestrator | 2025-06-22 20:08:53.163103 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-22 20:08:53.163115 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:03.194) 0:00:19.354 *********** 2025-06-22 20:08:53.163127 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-06-22 20:08:53.163140 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-06-22 20:08:53.163152 | orchestrator | 2025-06-22 20:08:53.163164 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-22 20:08:53.163177 | orchestrator | Sunday 22 June 2025 20:07:29 +0000 (0:00:05.443) 0:00:24.797 *********** 2025-06-22 20:08:53.163190 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-06-22 20:08:53.163202 | orchestrator | 2025-06-22 20:08:53.163214 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:08:53.163227 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163240 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163252 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163264 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-06-22 20:08:53.163276 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163301 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163314 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.163327 | orchestrator | 2025-06-22 20:08:53.163386 | orchestrator | 2025-06-22 20:08:53.163400 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:08:53.163413 | orchestrator | Sunday 22 June 2025 20:07:33 +0000 (0:00:04.086) 0:00:28.884 *********** 2025-06-22 20:08:53.163426 | orchestrator | =============================================================================== 2025-06-22 20:08:53.163437 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.73s 2025-06-22 20:08:53.163455 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.44s 2025-06-22 20:08:53.163466 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.09s 2025-06-22 20:08:53.163477 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.19s 2025-06-22 20:08:53.163488 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.19s 2025-06-22 20:08:53.163508 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.84s 2025-06-22 20:08:53.163519 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.75s 2025-06-22 20:08:53.163530 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s 2025-06-22 20:08:53.163541 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s 2025-06-22 20:08:53.163552 | orchestrator | 2025-06-22 20:08:53.163563 | orchestrator | 2025-06-22 20:08:53.163574 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-22 20:08:53.163585 | orchestrator | 2025-06-22 20:08:53.163596 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-22 20:08:53.163607 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.122) 0:00:00.122 *********** 2025-06-22 20:08:53.163618 | orchestrator | changed: [localhost] 2025-06-22 20:08:53.163629 | orchestrator | 2025-06-22 20:08:53.163640 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-22 20:08:53.163651 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.833) 0:00:00.956 *********** 2025-06-22 20:08:53.163662 | orchestrator | 2025-06-22 20:08:53.163672 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163683 | orchestrator | 2025-06-22 20:08:53.163694 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163705 | orchestrator | 2025-06-22 20:08:53.163715 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163726 | orchestrator | 2025-06-22 20:08:53.163737 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163748 | orchestrator | 
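
The STILL ALIVE markers around this point are keepalive output for the long-running "Download ironic-agent initramfs" task (the recap further below times it at roughly six minutes). As a rough illustration only, a streaming download that emits a periodic heartbeat can be sketched as follows; the URL, destination path, and heartbeat interval are placeholders and are not taken from the testbed playbooks, which perform the download through Ansible rather than this script.

```python
import sys
import time
import urllib.request

# Placeholders for illustration only -- not the values used by the testbed.
URL = "https://example.org/ironic-python-agent.initramfs"
DEST = "/tmp/ironic-agent.initramfs"
CHUNK = 1 << 20          # read 1 MiB at a time
HEARTBEAT_EVERY = 30     # seconds between keepalive messages


def download(url: str, dest: str) -> None:
    """Stream a large file to disk, printing a heartbeat while it runs."""
    last_beat = time.monotonic()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while True:
            chunk = resp.read(CHUNK)
            if not chunk:
                break
            out.write(chunk)
            now = time.monotonic()
            if now - last_beat >= HEARTBEAT_EVERY:
                print("still alive: download in progress", file=sys.stderr)
                last_beat = now


if __name__ == "__main__":
    download(URL, DEST)
```
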
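The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines throughout this section come from a wait loop that polls the deployment tasks until each reaches a terminal state. A minimal sketch of that pattern is shown below; the status-lookup callable is a toy stand-in, not an actual OSISM API, and the real tooling queries its task backend instead.

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll every task until it reaches a terminal state, logging as we go."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Toy stand-in for a real status lookup: every task flips to SUCCESS
    # after three polls.
    polls: dict = {}

    def fake_state(task_id: str) -> str:
        polls[task_id] = polls.get(task_id, 0) + 1
        return "SUCCESS" if polls[task_id] >= 3 else "STARTED"

    wait_for_tasks(["magnum", "ceph-rgw", "ironic", "prometheus"],
                   fake_state, interval=0.1)
```
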
2025-06-22 20:08:53.163759 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163769 | orchestrator | 2025-06-22 20:08:53.163780 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163791 | orchestrator | 2025-06-22 20:08:53.163802 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163813 | orchestrator | 2025-06-22 20:08:53.163823 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:53.163834 | orchestrator | changed: [localhost] 2025-06-22 20:08:53.163846 | orchestrator | 2025-06-22 20:08:53.163866 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-22 20:08:53.163884 | orchestrator | Sunday 22 June 2025 20:08:35 +0000 (0:06:03.691) 0:06:04.648 *********** 2025-06-22 20:08:53.163941 | orchestrator | changed: [localhost] 2025-06-22 20:08:53.163960 | orchestrator | 2025-06-22 20:08:53.163996 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:08:53.164008 | orchestrator | 2025-06-22 20:08:53.164020 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:08:53.164031 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:12.977) 0:06:17.625 *********** 2025-06-22 20:08:53.164077 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:08:53.164090 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:08:53.164101 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:08:53.164112 | orchestrator | 2025-06-22 20:08:53.164123 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:08:53.164134 | orchestrator | Sunday 22 June 2025 20:08:49 +0000 (0:00:00.524) 0:06:18.150 *********** 2025-06-22 20:08:53.164145 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-22 20:08:53.164156 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-22 20:08:53.164167 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-22 20:08:53.164177 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-22 20:08:53.164188 | orchestrator | 2025-06-22 20:08:53.164200 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-22 20:08:53.164211 | orchestrator | skipping: no hosts matched 2025-06-22 20:08:53.164230 | orchestrator | 2025-06-22 20:08:53.164241 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:08:53.164252 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.164263 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.164274 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.164285 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:08:53.164296 | orchestrator | 2025-06-22 20:08:53.164307 | orchestrator | 2025-06-22 20:08:53.164318 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 
20:08:53.164337 | orchestrator | Sunday 22 June 2025 20:08:50 +0000 (0:00:00.927) 0:06:19.078 *********** 2025-06-22 20:08:53.164348 | orchestrator | =============================================================================== 2025-06-22 20:08:53.164359 | orchestrator | Download ironic-agent initramfs --------------------------------------- 363.69s 2025-06-22 20:08:53.164370 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.98s 2025-06-22 20:08:53.164381 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-06-22 20:08:53.164392 | orchestrator | Ensure the destination directory exists --------------------------------- 0.83s 2025-06-22 20:08:53.164403 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2025-06-22 20:08:53.164523 | orchestrator | 2025-06-22 20:08:53 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:08:53.164539 | orchestrator | 2025-06-22 20:08:53 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:53.164550 | orchestrator | 2025-06-22 20:08:53 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:53.164562 | orchestrator | 2025-06-22 20:08:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:56.193485 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:56.193569 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:08:56.194134 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:56.194571 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:56.194594 | orchestrator | 2025-06-22 20:08:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:59.252577 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:08:59.252683 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:08:59.252707 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:08:59.252727 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:08:59.252746 | orchestrator | 2025-06-22 20:08:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:02.266912 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:09:02.268262 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:02.270145 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:02.272846 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:02.272880 | orchestrator | 2025-06-22 20:09:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:05.316394 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state STARTED 2025-06-22 20:09:05.317692 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task 
a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:05.318977 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:05.319896 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:05.319920 | orchestrator | 2025-06-22 20:09:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:08.370690 | orchestrator | 2025-06-22 20:09:08.370811 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task dcb46c2d-2182-43e8-a69d-b7a53e5eb9a2 is in state SUCCESS 2025-06-22 20:09:08.372867 | orchestrator | 2025-06-22 20:09:08.372896 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:09:08.372902 | orchestrator | 2025-06-22 20:09:08.372906 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:09:08.372911 | orchestrator | Sunday 22 June 2025 20:05:47 +0000 (0:00:00.228) 0:00:00.228 *********** 2025-06-22 20:09:08.372915 | orchestrator | ok: [testbed-manager] 2025-06-22 20:09:08.372969 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:08.372975 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:08.372979 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:08.373018 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:08.373028 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:08.373032 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:08.373041 | orchestrator | 2025-06-22 20:09:08.373131 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:09:08.373210 | orchestrator | Sunday 22 June 2025 20:05:48 +0000 (0:00:00.611) 0:00:00.840 *********** 2025-06-22 20:09:08.373218 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373223 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373227 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373231 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373235 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373239 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373244 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-22 20:09:08.373248 | orchestrator | 2025-06-22 20:09:08.373253 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-22 20:09:08.373256 | orchestrator | 2025-06-22 20:09:08.373272 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-22 20:09:08.373276 | orchestrator | Sunday 22 June 2025 20:05:48 +0000 (0:00:00.542) 0:00:01.383 *********** 2025-06-22 20:09:08.373281 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:09:08.373303 | orchestrator | 2025-06-22 20:09:08.373307 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-22 20:09:08.373311 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:01.236) 0:00:02.619 *********** 2025-06-22 20:09:08.373317 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:09:08.373340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373369 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 
20:09:08.373376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:09:08.373437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373453 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 
20:09:08.373484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373722 | orchestrator | 2025-06-22 20:09:08.373727 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-22 20:09:08.373731 | orchestrator | Sunday 22 June 2025 20:05:53 +0000 (0:00:03.542) 0:00:06.161 *********** 2025-06-22 20:09:08.373735 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:09:08.373739 | orchestrator | 2025-06-22 20:09:08.373744 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-22 20:09:08.373747 | 
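Note on the loop output above: each `item` echoed by the "Ensuring config directories exist" task is one entry from the role's Prometheus service map (keys such as 'prometheus-server', 'prometheus-node-exporter'), and a host only processes the services whose group it belongs to and that are enabled. The following is a minimal, illustrative Python sketch of that filtering, not kolla-ansible's actual implementation; the service entries are abridged from the log, while the `host_groups` mapping and helper function are assumptions added for illustration.

```python
# Illustrative sketch only: mirrors the per-service loop visible in the log above.
# Service definitions are abridged from the log output; the host/group mapping and
# the filtering helper below are hypothetical, not kolla-ansible code.

prometheus_services = {
    "prometheus-server": {
        "container_name": "prometheus_server",
        "group": "prometheus",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530",
    },
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530",
    },
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530",
    },
}

# Hypothetical inventory excerpt: which service groups each host belongs to.
host_groups = {
    "testbed-manager": {"prometheus", "prometheus-node-exporter", "prometheus-cadvisor"},
    "testbed-node-3": {"prometheus-node-exporter", "prometheus-cadvisor"},
}


def services_for_host(host: str) -> list[str]:
    """Return the service keys whose config directories would be created on `host`."""
    groups = host_groups.get(host, set())
    return [
        name
        for name, svc in prometheus_services.items()
        if svc["enabled"] and svc["group"] in groups
    ]


if __name__ == "__main__":
    for host in host_groups:
        print(host, "->", services_for_host(host))
```

Running the sketch prints, per host, the same subset of services that the loop above reports as `changed` for that host (restricted to the abridged entries shown here).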
orchestrator | Sunday 22 June 2025 20:05:55 +0000 (0:00:01.415) 0:00:07.576 *********** 2025-06-22 20:09:08.373752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:09:08.373769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373782 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373803 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-22 20:09:08.373815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373837 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:09:08.373842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.373860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.373871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.373883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.374082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.374097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.374105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.374139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.374143 | orchestrator | 2025-06-22 20:09:08.374147 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-22 20:09:08.374152 | orchestrator | Sunday 22 June 2025 20:06:02 +0000 (0:00:07.603) 0:00:15.180 *********** 2025-06-22 20:09:08.374156 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:09:08.374160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374165 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:09:08.374185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374190 | orchestrator | skipping: 
[testbed-manager] 2025-06-22 20:09:08.374194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374283 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.374287 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.374290 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.374294 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.374298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374322 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.374326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374358 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.374362 | orchestrator | 2025-06-22 20:09:08.374397 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend 
internal TLS key] *** 2025-06-22 20:09:08.374418 | orchestrator | Sunday 22 June 2025 20:06:04 +0000 (0:00:02.117) 0:00:17.298 *********** 2025-06-22 20:09:08.374422 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:09:08.374461 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374465 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.374474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:09:08.374517 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.374948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.374989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375016 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.375020 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.375025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.375028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375121 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.375125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.375129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.375154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:09:08.375169 | 
orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.375173 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.375177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.375851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375876 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.375881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:09:08.375890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:09:08.375905 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.375909 | orchestrator | 2025-06-22 20:09:08.375913 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-22 20:09:08.375952 | orchestrator | Sunday 22 June 2025 20:06:06 +0000 (0:00:02.126) 0:00:19.424 *********** 2025-06-22 20:09:08.375958 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:09:08.375963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376241 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376254 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376267 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:09:08.376358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376415 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.376434 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.376445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.376467 | orchestrator | 2025-06-22 20:09:08.376471 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-22 
20:09:08.376489 | orchestrator | Sunday 22 June 2025 20:06:15 +0000 (0:00:08.813) 0:00:28.237 *********** 2025-06-22 20:09:08.376493 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:09:08.376497 | orchestrator | 2025-06-22 20:09:08.376501 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-22 20:09:08.376505 | orchestrator | Sunday 22 June 2025 20:06:17 +0000 (0:00:01.299) 0:00:29.537 *********** 2025-06-22 20:09:08.376509 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376513 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376520 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376524 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376534 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
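Annotation: the task output around this point shows the custom alert and recording rule files from /operations/prometheus/ being applied on testbed-manager (changed) and skipped on the other nodes, presumably because only the Prometheus server host consumes them. As a rough illustration of the file format involved (the group name, alert name, expression, and thresholds below are hypothetical and are not taken from the actual testbed rule files), a Prometheus rule file of this kind looks roughly like:

  # Illustrative sketch only; not the contents of the copied
  # /operations/prometheus/*.rules files.
  groups:
    - name: node-health                 # hypothetical group name
      rules:
        - alert: NodeExporterDown       # hypothetical alert
          expr: up{job="node"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "node-exporter target {{ $labels.instance }} is unreachable"
        - record: instance:node_load1:per_cpu   # hypothetical recording rule
          expr: node_load1 / count without (cpu, mode) (node_cpu_seconds_total{mode="idle"})

The real files referenced in the log (node.rules, ceph.rules, haproxy.rules, openstack.rules, the *.rec.rules recording-rule files, and so on) would follow this same groups/rules layout expected by the prometheus-v2-server container deployed above.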
2025-06-22 20:09:08.376538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376546 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376550 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376557 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376561 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376570 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376574 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376578 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376582 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376586 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376594 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376599 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376629 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376633 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376637 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376646 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376654 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376661 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376667 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376671 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376675 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376679 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376683 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376689 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376703 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376707 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-06-22 20:09:08.376711 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376716 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376719 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376726 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376733 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376739 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376743 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376747 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376751 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376765 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376769 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 
'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376774 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376780 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376788 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376792 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376799 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376806 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376810 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376816 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376820 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376824 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376828 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376842 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376852 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376856 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376864 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376874 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376885 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376896 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376900 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376907 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376918 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376922 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.376926 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376936 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.376940 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.376948 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376955 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376961 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068790, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376965 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376969 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376975 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.376979 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376983 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1068779, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376989 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.376993 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 
1068747, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4139867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.376999 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377003 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377007 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1068754, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377012 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377017 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1068773, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377023 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377027 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377031 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1068759, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4159865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377172 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377180 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1068770, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4179866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377187 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377191 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1068781, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377198 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377202 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1068789, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4209867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377206 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377213 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377217 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1068800, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377223 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377232 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377236 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377244 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377250 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377255 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1068785, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4199867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377260 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377267 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377271 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377275 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377279 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377285 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377289 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377293 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377299 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377308 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068757, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4149866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:09:08.377316 | orchestrator | 
skipping: [testbed-node-5] 2025-06-22 20:09:08.377320 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1068768, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1068745, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4129865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1068777, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4189868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377334 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1068798, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4239867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377343 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1068763, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4169867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377347 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1068791, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4219868, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:09:08.377351 | orchestrator | 2025-06-22 20:09:08.377355 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-22 20:09:08.377359 | orchestrator | Sunday 22 June 2025 20:06:39 +0000 (0:00:22.237) 0:00:51.774 *********** 2025-06-22 20:09:08.377363 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:09:08.377367 | orchestrator | 2025-06-22 20:09:08.377371 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-22 20:09:08.377375 | orchestrator | Sunday 22 June 2025 20:06:40 +0000 (0:00:00.749) 0:00:52.523 *********** 2025-06-22 20:09:08.377379 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377387 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377390 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377394 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377398 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:09:08.377402 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377406 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377410 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377417 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377421 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377425 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377429 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377436 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377440 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377448 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377451 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377455 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377459 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377464 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377468 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377480 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377484 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377488 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377491 | orchestrator 
| node-4/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377499 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377503 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377510 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-22 20:09:08.377514 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:09:08.377518 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-22 20:09:08.377521 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:09:08.377525 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 20:09:08.377529 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 20:09:08.377533 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:09:08.377536 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:09:08.377540 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:09:08.377544 | orchestrator | 2025-06-22 20:09:08.377550 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-22 20:09:08.377554 | orchestrator | Sunday 22 June 2025 20:06:41 +0000 (0:00:01.713) 0:00:54.236 *********** 2025-06-22 20:09:08.377557 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377562 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.377565 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377569 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377573 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377577 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377581 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377584 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377588 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377592 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377596 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:09:08.377599 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377603 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-22 20:09:08.377607 | orchestrator | 2025-06-22 20:09:08.377611 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-22 20:09:08.377614 | orchestrator | Sunday 22 June 2025 20:06:54 +0000 (0:00:12.918) 0:01:07.155 *********** 2025-06-22 20:09:08.377618 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377622 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.377626 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377630 | 
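A note on the two "Find prometheus ... config overrides" tasks above: they scan /opt/configuration/environments/kolla/files/overlays/prometheus/ for optional prometheus.yml.d drop-in directories, and the [WARNING]: Skipped lines only mean that no such per-host directory exists in this testbed configuration, so the tasks still finish ok. Assuming that any fragments found there are merged into the prometheus.yml rendered for the Prometheus server host (the file that the "Copying over prometheus config file" task writes, changed only on testbed-manager), a minimal override fragment could look like the sketch below; the file name, job name and target are illustrative and not taken from this run.

# /opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra-scrape.yml
# Hypothetical drop-in fragment (not present in this run); assumes fragments in
# prometheus.yml.d are merged into the generated prometheus.yml for this host.
scrape_configs:
  - job_name: example_extra_exporter     # illustrative job name
    scrape_interval: 60s
    static_configs:
      - targets:
          - "192.168.16.10:9100"         # placeholder target, not from this log

Only testbed-manager reports changed for the prometheus.yml copy; the testbed-node-* hosts skip it, consistent with the prometheus_server container being checked only on the manager later in this play.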
orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377633 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377637 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377641 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377647 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377651 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377655 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:09:08.377662 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377666 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-22 20:09:08.377670 | orchestrator | 2025-06-22 20:09:08.377674 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-22 20:09:08.377677 | orchestrator | Sunday 22 June 2025 20:06:57 +0000 (0:00:02.668) 0:01:09.823 *********** 2025-06-22 20:09:08.377681 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377686 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377689 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.377693 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377697 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377701 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377706 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377710 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377714 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377718 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377722 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-22 20:09:08.377726 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:09:08.377729 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377733 | orchestrator | 2025-06-22 20:09:08.377737 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-22 20:09:08.377741 | orchestrator | Sunday 22 June 2025 20:06:58 +0000 (0:00:01.306) 0:01:11.129 *********** 2025-06-22 20:09:08.377745 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:09:08.377748 | orchestrator | 2025-06-22 20:09:08.377752 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-22 20:09:08.377756 | orchestrator | Sunday 22 June 
2025 20:06:59 +0000 (0:00:00.646) 0:01:11.776 *********** 2025-06-22 20:09:08.377760 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.377763 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.377767 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377771 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377777 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377780 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377784 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377788 | orchestrator | 2025-06-22 20:09:08.377792 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-22 20:09:08.377795 | orchestrator | Sunday 22 June 2025 20:07:00 +0000 (0:00:00.704) 0:01:12.480 *********** 2025-06-22 20:09:08.377799 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.377803 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377807 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377811 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377816 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.377823 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.377827 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.377831 | orchestrator | 2025-06-22 20:09:08.377836 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-22 20:09:08.377840 | orchestrator | Sunday 22 June 2025 20:07:01 +0000 (0:00:01.935) 0:01:14.416 *********** 2025-06-22 20:09:08.377845 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377849 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377853 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.377858 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377862 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.377866 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377871 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377875 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377879 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377883 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377888 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377892 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377896 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:09:08.377900 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377905 | orchestrator | 2025-06-22 20:09:08.377909 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-22 20:09:08.377914 | orchestrator | Sunday 22 June 2025 20:07:03 +0000 (0:00:01.364) 0:01:15.781 *********** 2025-06-22 20:09:08.377918 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377923 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:09:08.377927 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377932 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377936 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377940 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.377945 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.377949 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.377953 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-22 20:09:08.377958 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377962 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.377966 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:09:08.377973 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.377978 | orchestrator | 2025-06-22 20:09:08.377982 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-22 20:09:08.377986 | orchestrator | Sunday 22 June 2025 20:07:04 +0000 (0:00:01.397) 0:01:17.179 *********** 2025-06-22 20:09:08.377991 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:08.377995 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-22 20:09:08.377999 | orchestrator | due to this access issue: 2025-06-22 20:09:08.378004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-22 20:09:08.378011 | orchestrator | not a directory 2025-06-22 20:09:08.378064 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:09:08.378069 | orchestrator | 2025-06-22 20:09:08.378074 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-22 20:09:08.378078 | orchestrator | Sunday 22 June 2025 20:07:06 +0000 (0:00:01.465) 0:01:18.644 *********** 2025-06-22 20:09:08.378082 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.378087 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.378091 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.378096 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.378100 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.378105 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.378109 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.378113 | orchestrator | 2025-06-22 20:09:08.378118 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-22 20:09:08.378122 | orchestrator | Sunday 22 June 2025 20:07:07 +0000 (0:00:01.166) 0:01:19.810 *********** 2025-06-22 20:09:08.378127 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.378131 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:08.378135 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:08.378143 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:08.378148 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:08.378152 | 
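The "Find extra prometheus server config files" warning above, together with the skipped "Create subdirectories for extra config files" and "Template extra prometheus server config files" tasks, covers another optional hook: files placed under overlays/prometheus/extras/ appear to be templated into the Prometheus server configuration alongside the rule files copied earlier; nothing is provided in this testbed configuration, hence the skips. A sketch of such an extra file, assuming this mechanism; the path, group name and alert are illustrative only.

# /opt/configuration/environments/kolla/files/overlays/prometheus/extras/custom.rules
# Hypothetical extra rule file (the extras/ directory does not exist in this run,
# so the related tasks are skipped).
groups:
  - name: custom-testbed-alerts            # illustrative group name
    rules:
      - alert: NodeExporterTargetDown      # example alert on a missing target
        expr: up{job="node"} == 0          # job label is an assumption, not from this log
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: A node exporter target has been down for 5 minutes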
orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:08.378157 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:08.378161 | orchestrator | 2025-06-22 20:09:08.378165 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-22 20:09:08.378170 | orchestrator | Sunday 22 June 2025 20:07:08 +0000 (0:00:01.114) 0:01:20.925 *********** 2025-06-22 20:09:08.378174 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:09:08.378179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378198 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378202 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378235 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:09:08.378240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378258 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378262 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378306 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:09:08.378310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:09:08.378317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:09:08.378339 | orchestrator | 2025-06-22 20:09:08.378343 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-22 20:09:08.378347 | 
orchestrator | Sunday 22 June 2025 20:07:13 +0000 (0:00:04.991) 0:01:25.916 *********** 2025-06-22 20:09:08.378351 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 20:09:08.378354 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:09:08.378358 | orchestrator | 2025-06-22 20:09:08.378362 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378366 | orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:01.083) 0:01:26.999 *********** 2025-06-22 20:09:08.378370 | orchestrator | 2025-06-22 20:09:08.378374 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378377 | orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:00.075) 0:01:27.075 *********** 2025-06-22 20:09:08.378381 | orchestrator | 2025-06-22 20:09:08.378385 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378389 | orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:00.310) 0:01:27.386 *********** 2025-06-22 20:09:08.378393 | orchestrator | 2025-06-22 20:09:08.378396 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378400 | orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:00.075) 0:01:27.461 *********** 2025-06-22 20:09:08.378404 | orchestrator | 2025-06-22 20:09:08.378408 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378412 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:00.068) 0:01:27.530 *********** 2025-06-22 20:09:08.378419 | orchestrator | 2025-06-22 20:09:08.378423 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378427 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:00.069) 0:01:27.599 *********** 2025-06-22 20:09:08.378430 | orchestrator | 2025-06-22 20:09:08.378434 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:09:08.378438 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:00.068) 0:01:27.668 *********** 2025-06-22 20:09:08.378442 | orchestrator | 2025-06-22 20:09:08.378446 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-22 20:09:08.378449 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:00.096) 0:01:27.765 *********** 2025-06-22 20:09:08.378453 | orchestrator | changed: [testbed-manager] 2025-06-22 20:09:08.378457 | orchestrator | 2025-06-22 20:09:08.378461 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-22 20:09:08.378465 | orchestrator | Sunday 22 June 2025 20:07:38 +0000 (0:00:23.505) 0:01:51.270 *********** 2025-06-22 20:09:08.378468 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.378472 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:09:08.378476 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.378480 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.378483 | orchestrator | changed: [testbed-manager] 2025-06-22 20:09:08.378487 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:09:08.378491 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:09:08.378495 | orchestrator | 2025-06-22 20:09:08.378498 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter 
container] **** 2025-06-22 20:09:08.378502 | orchestrator | Sunday 22 June 2025 20:07:50 +0000 (0:00:12.044) 0:02:03.315 *********** 2025-06-22 20:09:08.378506 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.378510 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.378514 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.378517 | orchestrator | 2025-06-22 20:09:08.378521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-22 20:09:08.378525 | orchestrator | Sunday 22 June 2025 20:08:01 +0000 (0:00:11.047) 0:02:14.363 *********** 2025-06-22 20:09:08.378529 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.378532 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.378536 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.378540 | orchestrator | 2025-06-22 20:09:08.378544 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-22 20:09:08.378548 | orchestrator | Sunday 22 June 2025 20:08:07 +0000 (0:00:05.498) 0:02:19.861 *********** 2025-06-22 20:09:08.378551 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.378557 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:09:08.378561 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.378565 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.378569 | orchestrator | changed: [testbed-manager] 2025-06-22 20:09:08.378572 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:09:08.378576 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:09:08.378580 | orchestrator | 2025-06-22 20:09:08.378584 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-22 20:09:08.378588 | orchestrator | Sunday 22 June 2025 20:08:26 +0000 (0:00:19.367) 0:02:39.229 *********** 2025-06-22 20:09:08.378591 | orchestrator | changed: [testbed-manager] 2025-06-22 20:09:08.378595 | orchestrator | 2025-06-22 20:09:08.378599 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-22 20:09:08.378603 | orchestrator | Sunday 22 June 2025 20:08:33 +0000 (0:00:07.212) 0:02:46.442 *********** 2025-06-22 20:09:08.378607 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:08.378610 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:08.378614 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:08.378618 | orchestrator | 2025-06-22 20:09:08.378622 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-22 20:09:08.378626 | orchestrator | Sunday 22 June 2025 20:08:45 +0000 (0:00:11.145) 0:02:57.587 *********** 2025-06-22 20:09:08.378632 | orchestrator | changed: [testbed-manager] 2025-06-22 20:09:08.378636 | orchestrator | 2025-06-22 20:09:08.378640 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-22 20:09:08.378644 | orchestrator | Sunday 22 June 2025 20:08:51 +0000 (0:00:06.815) 0:03:04.403 *********** 2025-06-22 20:09:08.378647 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:09:08.378651 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:09:08.378655 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:09:08.378659 | orchestrator | 2025-06-22 20:09:08.378665 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:09:08.378669 | orchestrator | 
testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:09:08.378674 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:09:08.378678 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:09:08.378682 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:09:08.378685 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:09:08.378689 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:09:08.378693 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:09:08.378697 | orchestrator | 2025-06-22 20:09:08.378701 | orchestrator | 2025-06-22 20:09:08.378705 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:09:08.378708 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:13.568) 0:03:17.971 *********** 2025-06-22 20:09:08.378712 | orchestrator | =============================================================================== 2025-06-22 20:09:08.378716 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.51s 2025-06-22 20:09:08.378720 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.24s 2025-06-22 20:09:08.378724 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.37s 2025-06-22 20:09:08.378728 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.57s 2025-06-22 20:09:08.378731 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.92s 2025-06-22 20:09:08.378735 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.04s 2025-06-22 20:09:08.378739 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.15s 2025-06-22 20:09:08.378743 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.05s 2025-06-22 20:09:08.378747 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.81s 2025-06-22 20:09:08.378750 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.60s 2025-06-22 20:09:08.378754 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.21s 2025-06-22 20:09:08.378758 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.82s 2025-06-22 20:09:08.378762 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.50s 2025-06-22 20:09:08.378765 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.99s 2025-06-22 20:09:08.378769 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.54s 2025-06-22 20:09:08.378776 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.67s 2025-06-22 20:09:08.378779 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.13s 2025-06-22 20:09:08.378783 | orchestrator | service-cert-copy : prometheus | Copying over 
backend internal TLS certificate --- 2.12s 2025-06-22 20:09:08.378789 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.94s 2025-06-22 20:09:08.378799 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.71s 2025-06-22 20:09:08.378803 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:08.378807 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:08.378811 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:08.378815 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:08.378819 | orchestrator | 2025-06-22 20:09:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:11.436675 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:11.437742 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:11.439369 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:11.441033 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:11.441774 | orchestrator | 2025-06-22 20:09:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:14.486486 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:14.487743 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:14.489167 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:14.490610 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:14.490656 | orchestrator | 2025-06-22 20:09:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:17.533908 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:17.535467 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:17.537368 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:17.540667 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:17.540693 | orchestrator | 2025-06-22 20:09:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:20.582372 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:20.584224 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:20.587964 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:20.590804 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:20.590913 | orchestrator | 2025-06-22 20:09:20 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 20:09:23.633665 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:23.635282 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:23.636872 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:23.638258 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:23.638367 | orchestrator | 2025-06-22 20:09:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:26.678235 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:26.680250 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:26.683143 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:26.685003 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:26.685032 | orchestrator | 2025-06-22 20:09:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:29.718521 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:29.718755 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:29.719659 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:29.720651 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:29.720660 | orchestrator | 2025-06-22 20:09:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:32.758348 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:32.759443 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:32.761790 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:32.764216 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:32.764306 | orchestrator | 2025-06-22 20:09:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:35.808364 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:35.809778 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:35.811663 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:35.813087 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:35.813136 | orchestrator | 2025-06-22 20:09:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:38.863609 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:38.865510 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b 
is in state STARTED 2025-06-22 20:09:38.868451 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:38.870235 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:38.870279 | orchestrator | 2025-06-22 20:09:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:41.908242 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:41.909175 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:41.909725 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:41.910767 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:41.910789 | orchestrator | 2025-06-22 20:09:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:44.963853 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:44.967162 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:44.969528 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:44.971518 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:44.971611 | orchestrator | 2025-06-22 20:09:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:48.019586 | orchestrator | 2025-06-22 20:09:48 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:48.022132 | orchestrator | 2025-06-22 20:09:48 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:48.023455 | orchestrator | 2025-06-22 20:09:48 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:48.025418 | orchestrator | 2025-06-22 20:09:48 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:48.025873 | orchestrator | 2025-06-22 20:09:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:51.075571 | orchestrator | 2025-06-22 20:09:51 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:51.078367 | orchestrator | 2025-06-22 20:09:51 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:51.081008 | orchestrator | 2025-06-22 20:09:51 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:51.082976 | orchestrator | 2025-06-22 20:09:51 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:51.083143 | orchestrator | 2025-06-22 20:09:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:54.128667 | orchestrator | 2025-06-22 20:09:54 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:54.133182 | orchestrator | 2025-06-22 20:09:54 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:54.135446 | orchestrator | 2025-06-22 20:09:54 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:54.137835 | orchestrator | 2025-06-22 20:09:54 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is 
in state STARTED 2025-06-22 20:09:54.137928 | orchestrator | 2025-06-22 20:09:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:57.183909 | orchestrator | 2025-06-22 20:09:57 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:09:57.189775 | orchestrator | 2025-06-22 20:09:57 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:09:57.189864 | orchestrator | 2025-06-22 20:09:57 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:09:57.191439 | orchestrator | 2025-06-22 20:09:57 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:09:57.191780 | orchestrator | 2025-06-22 20:09:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:00.237216 | orchestrator | 2025-06-22 20:10:00 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:00.240046 | orchestrator | 2025-06-22 20:10:00 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:00.242778 | orchestrator | 2025-06-22 20:10:00 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:00.246001 | orchestrator | 2025-06-22 20:10:00 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state STARTED 2025-06-22 20:10:00.246115 | orchestrator | 2025-06-22 20:10:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:03.289127 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:03.291791 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:03.292742 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:03.293958 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:03.296475 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task 4697d1c4-a08f-4379-ab85-b5c716dcdf94 is in state SUCCESS 2025-06-22 20:10:03.298395 | orchestrator | 2025-06-22 20:10:03.298433 | orchestrator | 2025-06-22 20:10:03.298445 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:03.298457 | orchestrator | 2025-06-22 20:10:03.298468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:03.298479 | orchestrator | Sunday 22 June 2025 20:07:23 +0000 (0:00:00.260) 0:00:00.260 *********** 2025-06-22 20:10:03.298491 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:03.298503 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:03.298514 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:03.298524 | orchestrator | 2025-06-22 20:10:03.298535 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:03.298546 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:00.283) 0:00:00.543 *********** 2025-06-22 20:10:03.298557 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-22 20:10:03.298568 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-22 20:10:03.298584 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-22 20:10:03.298603 | orchestrator | 2025-06-22 20:10:03.298621 | orchestrator | PLAY [Apply role glance] ******************************************************* 
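[Editor's note] The block of "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines above is the OSISM client polling the manager until each queued Kolla play reports SUCCESS (as Task 4697d1c4 does at 20:10:03). As a rough illustration only — the helper get_task_state() and its return values are assumptions for this sketch, not OSISM's actual client API — such a wait loop could look like this in Python:

    import time

    # Placeholder for a real lookup against the manager's task/result backend;
    # the real API is not shown in this log, so this stub just returns SUCCESS.
    def get_task_state(task_id: str) -> str:
        return "SUCCESS"

    def wait_for_tasks(task_ids, poll_interval: int = 1) -> None:
        """Poll until every task has left the STARTED state, mirroring the log output above."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {poll_interval} second(s) until the next check")
                time.sleep(poll_interval)

The roughly three-second gaps between checks in the log (rather than exactly one second) come from the per-request overhead of querying each task's state, on top of the one-second sleep.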
2025-06-22 20:10:03.298639 | orchestrator | 2025-06-22 20:10:03.298657 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:10:03.298676 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:00.359) 0:00:00.903 *********** 2025-06-22 20:10:03.298693 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:03.298705 | orchestrator | 2025-06-22 20:10:03.298716 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-22 20:10:03.298727 | orchestrator | Sunday 22 June 2025 20:07:25 +0000 (0:00:00.479) 0:00:01.382 *********** 2025-06-22 20:10:03.298737 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-22 20:10:03.298748 | orchestrator | 2025-06-22 20:10:03.298759 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-22 20:10:03.298792 | orchestrator | Sunday 22 June 2025 20:07:28 +0000 (0:00:03.215) 0:00:04.598 *********** 2025-06-22 20:10:03.298804 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-22 20:10:03.298815 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-22 20:10:03.298826 | orchestrator | 2025-06-22 20:10:03.298836 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-22 20:10:03.298847 | orchestrator | Sunday 22 June 2025 20:07:33 +0000 (0:00:05.550) 0:00:10.149 *********** 2025-06-22 20:10:03.298858 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:03.298869 | orchestrator | 2025-06-22 20:10:03.298880 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-22 20:10:03.298891 | orchestrator | Sunday 22 June 2025 20:07:36 +0000 (0:00:02.878) 0:00:13.027 *********** 2025-06-22 20:10:03.298902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:03.298912 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-22 20:10:03.298923 | orchestrator | 2025-06-22 20:10:03.298934 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-22 20:10:03.298945 | orchestrator | Sunday 22 June 2025 20:07:40 +0000 (0:00:03.364) 0:00:16.393 *********** 2025-06-22 20:10:03.298955 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:03.298966 | orchestrator | 2025-06-22 20:10:03.298988 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-22 20:10:03.299000 | orchestrator | Sunday 22 June 2025 20:07:43 +0000 (0:00:03.052) 0:00:19.445 *********** 2025-06-22 20:10:03.299013 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-22 20:10:03.299026 | orchestrator | 2025-06-22 20:10:03.299038 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-22 20:10:03.299121 | orchestrator | Sunday 22 June 2025 20:07:47 +0000 (0:00:03.974) 0:00:23.419 *********** 2025-06-22 20:10:03.299169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299234 | orchestrator | 2025-06-22 20:10:03.299247 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:10:03.299260 | orchestrator | Sunday 22 June 2025 20:07:49 +0000 (0:00:02.839) 0:00:26.259 *********** 2025-06-22 20:10:03.299279 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:03.299293 | orchestrator | 2025-06-22 20:10:03.299306 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-22 20:10:03.299318 | orchestrator | Sunday 22 June 2025 20:07:50 +0000 (0:00:00.587) 0:00:26.846 *********** 2025-06-22 20:10:03.299331 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:03.299343 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:03.299356 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:03.299375 | orchestrator | 2025-06-22 20:10:03.299388 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-22 20:10:03.299399 | orchestrator | Sunday 22 June 2025 20:07:54 +0000 (0:00:04.131) 0:00:30.978 *********** 2025-06-22 20:10:03.299409 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:03.299420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:03.299431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:03.299442 | orchestrator | 2025-06-22 20:10:03.299453 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-22 20:10:03.299463 | orchestrator | Sunday 22 June 2025 20:07:56 +0000 (0:00:01.402) 0:00:32.381 *********** 2025-06-22 20:10:03.299474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 
20:10:03.299485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:03.299496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:03.299507 | orchestrator | 2025-06-22 20:10:03.299517 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:10:03.299528 | orchestrator | Sunday 22 June 2025 20:07:57 +0000 (0:00:01.189) 0:00:33.570 *********** 2025-06-22 20:10:03.299539 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:03.299550 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:03.299561 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:03.299571 | orchestrator | 2025-06-22 20:10:03.299582 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-22 20:10:03.299593 | orchestrator | Sunday 22 June 2025 20:07:58 +0000 (0:00:00.873) 0:00:34.444 *********** 2025-06-22 20:10:03.299603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.299614 | orchestrator | 2025-06-22 20:10:03.299625 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-22 20:10:03.299636 | orchestrator | Sunday 22 June 2025 20:07:58 +0000 (0:00:00.135) 0:00:34.580 *********** 2025-06-22 20:10:03.299647 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.299657 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.299668 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.299679 | orchestrator | 2025-06-22 20:10:03.299690 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:10:03.299701 | orchestrator | Sunday 22 June 2025 20:07:58 +0000 (0:00:00.311) 0:00:34.891 *********** 2025-06-22 20:10:03.299712 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:03.299723 | orchestrator | 2025-06-22 20:10:03.299733 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-22 20:10:03.299748 | orchestrator | Sunday 22 June 2025 20:07:59 +0000 (0:00:00.548) 0:00:35.440 *********** 2025-06-22 20:10:03.299767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.299824 | orchestrator | 2025-06-22 20:10:03.299840 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-22 20:10:03.299859 | orchestrator | Sunday 22 June 2025 20:08:02 +0000 (0:00:03.406) 0:00:38.846 *********** 2025-06-22 20:10:03.299890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.299910 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.299936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.299967 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.299998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.300019 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300038 | orchestrator | 2025-06-22 20:10:03.300081 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-22 20:10:03.300101 | orchestrator | Sunday 22 June 2025 20:08:05 +0000 (0:00:03.241) 0:00:42.088 *********** 2025-06-22 20:10:03.300129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.300151 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.300215 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:10:03.300254 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300271 | orchestrator | 2025-06-22 20:10:03.300291 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-22 20:10:03.300310 | orchestrator | Sunday 22 June 2025 20:08:09 +0000 (0:00:04.048) 0:00:46.136 *********** 2025-06-22 20:10:03.300330 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300357 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300377 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300395 | orchestrator | 2025-06-22 20:10:03.300412 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-22 20:10:03.300446 | orchestrator | Sunday 22 June 2025 20:08:14 +0000 (0:00:05.086) 0:00:51.223 *********** 2025-06-22 20:10:03.300477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.300501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.300530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.300562 | orchestrator | 2025-06-22 20:10:03.300580 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-22 20:10:03.300591 | orchestrator | Sunday 22 June 2025 20:08:19 +0000 (0:00:04.882) 0:00:56.105 *********** 2025-06-22 20:10:03.300602 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:03.300613 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:03.300623 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:03.300634 | orchestrator | 2025-06-22 20:10:03.300645 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-22 20:10:03.300662 | orchestrator | Sunday 22 June 2025 20:08:26 +0000 (0:00:06.296) 0:01:02.401 *********** 2025-06-22 20:10:03.300674 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300685 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300696 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300706 | orchestrator | 2025-06-22 20:10:03.300717 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-22 20:10:03.300728 | orchestrator | Sunday 22 June 2025 20:08:30 +0000 (0:00:04.380) 0:01:06.781 *********** 2025-06-22 20:10:03.300739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300750 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300760 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300771 | orchestrator | 2025-06-22 20:10:03.300782 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-22 20:10:03.300793 | orchestrator | Sunday 22 June 2025 20:08:36 +0000 (0:00:06.274) 0:01:13.056 *********** 2025-06-22 20:10:03.300804 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300825 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300836 | orchestrator | 2025-06-22 20:10:03.300846 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-22 20:10:03.300857 | orchestrator | Sunday 22 June 2025 20:08:41 +0000 (0:00:04.344) 0:01:17.400 *********** 2025-06-22 20:10:03.300868 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300879 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300890 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300900 | orchestrator | 2025-06-22 20:10:03.300911 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-22 20:10:03.300922 | orchestrator | Sunday 22 June 2025 20:08:44 +0000 (0:00:03.650) 0:01:21.051 *********** 2025-06-22 20:10:03.300933 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.300944 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.300954 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.300965 | orchestrator | 2025-06-22 20:10:03.300976 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 
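The 'haproxy' block repeated in the glance-api items above describes two load-balancer services, an internal glance_api and an external glance_api_external behind api.testbed.osism.xyz, both on port 9292, with 6-hour client/server timeouts (presumably to accommodate long-running image uploads) and an explicit member list using "check inter 2000 rise 2 fall 5". A minimal sketch of how those fields could be assembled into an HAProxy-style backend section follows; the section name and layout are illustrative assumptions, not the exact output of the kolla-ansible haproxy templates.

    # Illustrative only: builds an HAProxy-style backend block from the fields
    # shown in the glance-api items above. Section name and layout are assumptions.
    service = {
        "mode": "http",
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
            "",  # the log's member list carries a trailing empty entry
        ],
    }

    def render_backend(name, svc):
        """Return a minimal HAProxy backend block built from the dict fields."""
        lines = [f"backend {name}_back", f"    mode {svc['mode']}"]
        lines += [f"    {extra}" for extra in svc["backend_http_extra"]]
        lines += [f"    {member}" for member in svc["custom_member_list"] if member]
        return "\n".join(lines)

    print(render_backend("glance_api", service))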
2025-06-22 20:10:03.300992 | orchestrator | Sunday 22 June 2025 20:08:45 +0000 (0:00:00.612) 0:01:21.663 *********** 2025-06-22 20:10:03.301004 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:10:03.301015 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.301026 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:10:03.301037 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.301048 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:10:03.301092 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.301112 | orchestrator | 2025-06-22 20:10:03.301128 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-22 20:10:03.301140 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:07.556) 0:01:29.220 *********** 2025-06-22 20:10:03.301157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.301180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.301202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:10:03.301214 | orchestrator | 2025-06-22 20:10:03.301225 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:10:03.301236 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:05.705) 0:01:34.925 *********** 2025-06-22 20:10:03.301247 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:03.301258 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:03.301269 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:03.301279 | orchestrator | 2025-06-22 20:10:03.301291 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-22 20:10:03.301301 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:00.270) 0:01:35.196 *********** 2025-06-22 
20:10:03.301312 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301323 | orchestrator |
2025-06-22 20:10:03.301334 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-06-22 20:10:03.301345 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:01.980) 0:01:37.176 ***********
2025-06-22 20:10:03.301356 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301367 | orchestrator |
2025-06-22 20:10:03.301378 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-06-22 20:10:03.301389 | orchestrator | Sunday 22 June 2025 20:09:03 +0000 (0:00:02.447) 0:01:39.624 ***********
2025-06-22 20:10:03.301400 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301410 | orchestrator |
2025-06-22 20:10:03.301421 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-06-22 20:10:03.301437 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:02.122) 0:01:41.747 ***********
2025-06-22 20:10:03.301449 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301460 | orchestrator |
2025-06-22 20:10:03.301471 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-06-22 20:10:03.301482 | orchestrator | Sunday 22 June 2025 20:09:34 +0000 (0:00:28.680) 0:02:10.428 ***********
2025-06-22 20:10:03.301499 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301510 | orchestrator |
2025-06-22 20:10:03.301521 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-22 20:10:03.301532 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:00.057) 0:02:12.835 ***********
2025-06-22 20:10:03.301543 | orchestrator |
2025-06-22 20:10:03.301554 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-22 20:10:03.301565 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:00.060) 0:02:12.893 ***********
2025-06-22 20:10:03.301576 | orchestrator |
2025-06-22 20:10:03.301587 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-22 20:10:03.301598 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:00.059) 0:02:12.953 ***********
2025-06-22 20:10:03.301609 | orchestrator |
2025-06-22 20:10:03.301620 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-06-22 20:10:03.301631 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:00.059) 0:02:13.013 ***********
2025-06-22 20:10:03.301642 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:10:03.301653 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:10:03.301663 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:10:03.301674 | orchestrator |
2025-06-22 20:10:03.301686 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:10:03.301726 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-22 20:10:03.301738 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-22 20:10:03.301750 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-22 20:10:03.301761 | orchestrator |
2025-06-22 20:10:03.301772 | orchestrator |
2025-06-22 20:10:03.301783 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:10:03.301794 | orchestrator | Sunday 22 June 2025 20:10:01 +0000 (0:00:24.890) 0:02:37.904 ***********
2025-06-22 20:10:03.301805 | orchestrator | ===============================================================================
2025-06-22 20:10:03.301815 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.68s
2025-06-22 20:10:03.301826 | orchestrator | glance : Restart glance-api container ---------------------------------- 24.89s
2025-06-22 20:10:03.301837 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 7.56s
2025-06-22 20:10:03.301848 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.30s
2025-06-22 20:10:03.301859 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.27s
2025-06-22 20:10:03.301869 | orchestrator | glance : Check glance containers ---------------------------------------- 5.71s
2025-06-22 20:10:03.301885 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.55s
2025-06-22 20:10:03.301896 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.09s
2025-06-22 20:10:03.301906 | orchestrator | glance : Copying over config.json files for services -------------------- 4.88s
2025-06-22 20:10:03.301917 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.38s
2025-06-22 20:10:03.301928 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.34s
2025-06-22 20:10:03.301939 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.13s
2025-06-22 20:10:03.301949 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.05s
2025-06-22 20:10:03.301960 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.97s
2025-06-22 20:10:03.301971 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.65s
2025-06-22 20:10:03.301982 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.41s
2025-06-22 20:10:03.301999 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.37s
2025-06-22 20:10:03.302010 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.24s
2025-06-22 20:10:03.302098 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.22s
2025-06-22 20:10:03.302110 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.05s
2025-06-22 20:10:03.302122 | orchestrator | 2025-06-22 20:10:03 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:10:06.349959 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED
2025-06-22 20:10:06.352656 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED
2025-06-22 20:10:06.354615 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED
2025-06-22 20:10:06.356351 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED
2025-06-22 20:10:06.357021 | orchestrator | 2025-06-22 20:10:06 | INFO  | Wait 1 second(s) until the
next check 2025-06-22 20:10:09.398588 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:09.398818 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:09.399589 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:09.400408 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:09.400435 | orchestrator | 2025-06-22 20:10:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:12.445934 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:12.447362 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:12.451120 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:12.452659 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:12.453159 | orchestrator | 2025-06-22 20:10:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:15.494854 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:15.496646 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:15.498125 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:15.500544 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:15.500597 | orchestrator | 2025-06-22 20:10:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:18.542535 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:18.544895 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:18.546600 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:18.548148 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:18.548204 | orchestrator | 2025-06-22 20:10:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:21.593671 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:21.596108 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:21.598130 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:21.600158 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:21.600184 | orchestrator | 2025-06-22 20:10:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:24.644566 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:24.646892 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state 
STARTED 2025-06-22 20:10:24.648482 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:24.649985 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:24.650008 | orchestrator | 2025-06-22 20:10:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:27.696984 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:27.699883 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:27.701950 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:27.704276 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:27.704323 | orchestrator | 2025-06-22 20:10:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:30.746004 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:30.747756 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:30.749437 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:30.750749 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:30.751180 | orchestrator | 2025-06-22 20:10:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:33.793520 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:33.793866 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:33.794620 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:33.795721 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:33.795770 | orchestrator | 2025-06-22 20:10:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:36.835809 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:36.838347 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:36.840596 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:36.843535 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:36.843574 | orchestrator | 2025-06-22 20:10:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:39.891568 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:39.892939 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:39.895038 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:39.896548 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state 
STARTED 2025-06-22 20:10:39.896565 | orchestrator | 2025-06-22 20:10:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:42.951141 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:42.951670 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:42.952839 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:42.954258 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:42.954287 | orchestrator | 2025-06-22 20:10:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:45.992852 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:45.993375 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:46.000784 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:46.001698 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:46.001721 | orchestrator | 2025-06-22 20:10:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:49.067384 | orchestrator | 2025-06-22 20:10:49 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:49.068888 | orchestrator | 2025-06-22 20:10:49 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:49.070491 | orchestrator | 2025-06-22 20:10:49 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:49.072415 | orchestrator | 2025-06-22 20:10:49 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state STARTED 2025-06-22 20:10:49.072718 | orchestrator | 2025-06-22 20:10:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:52.105010 | orchestrator | 2025-06-22 20:10:52 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:52.106411 | orchestrator | 2025-06-22 20:10:52 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:52.108371 | orchestrator | 2025-06-22 20:10:52 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 20:10:52.109855 | orchestrator | 2025-06-22 20:10:52 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:52.112897 | orchestrator | 2025-06-22 20:10:52 | INFO  | Task a8d9d82b-2f4d-4a01-8d2c-85f4fc489185 is in state SUCCESS 2025-06-22 20:10:52.115175 | orchestrator | 2025-06-22 20:10:52.115365 | orchestrator | 2025-06-22 20:10:52.115389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:52.115426 | orchestrator | 2025-06-22 20:10:52.115444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:52.115463 | orchestrator | Sunday 22 June 2025 20:07:37 +0000 (0:00:00.270) 0:00:00.270 *********** 2025-06-22 20:10:52.115477 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:52.115488 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:52.115501 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:52.115519 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:10:52.115530 
| orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:52.115541 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:52.115571 | orchestrator | 2025-06-22 20:10:52.115590 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:52.115608 | orchestrator | Sunday 22 June 2025 20:07:38 +0000 (0:00:00.762) 0:00:01.033 *********** 2025-06-22 20:10:52.115627 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-22 20:10:52.115667 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-22 20:10:52.115721 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-22 20:10:52.115735 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-22 20:10:52.115747 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-22 20:10:52.115760 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-22 20:10:52.115779 | orchestrator | 2025-06-22 20:10:52.115800 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-22 20:10:52.115812 | orchestrator | 2025-06-22 20:10:52.115824 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:10:52.115844 | orchestrator | Sunday 22 June 2025 20:07:39 +0000 (0:00:00.779) 0:00:01.812 *********** 2025-06-22 20:10:52.115864 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:52.115884 | orchestrator | 2025-06-22 20:10:52.115899 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-22 20:10:52.115910 | orchestrator | Sunday 22 June 2025 20:07:41 +0000 (0:00:01.949) 0:00:03.762 *********** 2025-06-22 20:10:52.115922 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-22 20:10:52.115937 | orchestrator | 2025-06-22 20:10:52.115955 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-22 20:10:52.115973 | orchestrator | Sunday 22 June 2025 20:07:44 +0000 (0:00:03.336) 0:00:07.099 *********** 2025-06-22 20:10:52.115985 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-22 20:10:52.116008 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-22 20:10:52.116027 | orchestrator | 2025-06-22 20:10:52.116042 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-22 20:10:52.116080 | orchestrator | Sunday 22 June 2025 20:07:51 +0000 (0:00:06.791) 0:00:13.891 *********** 2025-06-22 20:10:52.116091 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:52.116108 | orchestrator | 2025-06-22 20:10:52.116127 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-22 20:10:52.116148 | orchestrator | Sunday 22 June 2025 20:07:54 +0000 (0:00:03.393) 0:00:17.284 *********** 2025-06-22 20:10:52.116165 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:52.116244 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-22 20:10:52.116260 | orchestrator | 2025-06-22 20:10:52.116272 | orchestrator | TASK [service-ks-register 
: cinder | Creating roles] *************************** 2025-06-22 20:10:52.116282 | orchestrator | Sunday 22 June 2025 20:07:58 +0000 (0:00:03.982) 0:00:21.267 *********** 2025-06-22 20:10:52.116293 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:52.116303 | orchestrator | 2025-06-22 20:10:52.116331 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-22 20:10:52.116343 | orchestrator | Sunday 22 June 2025 20:08:02 +0000 (0:00:03.998) 0:00:25.266 *********** 2025-06-22 20:10:52.116361 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-22 20:10:52.116374 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-22 20:10:52.116430 | orchestrator | 2025-06-22 20:10:52.116445 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-22 20:10:52.116455 | orchestrator | Sunday 22 June 2025 20:08:11 +0000 (0:00:08.491) 0:00:33.757 *********** 2025-06-22 20:10:52.116490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.116516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.116537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
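Each changed item in the task above is one kolla-ansible service definition for that host: container name, image, bind mounts, an optional haproxy entry, and a healthcheck ('healthcheck_curl' against the API port for cinder-api, 'healthcheck_port' against port 5672 for scheduler, volume and backup). The numeric healthcheck fields look like seconds and appear to feed the container's Docker healthcheck; the sketch below shows one plausible mapping onto docker run --health-* options, as an assumption rather than the actual kolla-ansible code path.

    # Illustrative only: maps a kolla-style healthcheck dict (copied from the
    # cinder-api item above) onto Docker CLI health options. The --health-*
    # mapping and the seconds unit are assumptions, not kolla-ansible's code.
    healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
        "timeout": "30",
    }

    def to_docker_flags(hc):
        """Render the healthcheck dict as docker run options (sketch)."""
        cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
        return [
            f"--health-cmd={cmd!r}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    print(" ".join(to_docker_flags(healthcheck)))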
2025-06-22 20:10:52.116565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.116593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.116781 | orchestrator | 2025-06-22 20:10:52.116802 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:10:52.116820 | orchestrator | Sunday 22 June 2025 20:08:14 +0000 (0:00:03.118) 0:00:36.876 *********** 2025-06-22 20:10:52.116838 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.116855 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.116874 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.116893 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.116909 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.116922 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.116940 | orchestrator | 2025-06-22 20:10:52.116955 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:10:52.116971 | orchestrator | Sunday 22 June 2025 20:08:15 +0000 (0:00:00.815) 0:00:37.692 *********** 2025-06-22 20:10:52.116989 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.117000 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.117017 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.117033 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:52.117103 | orchestrator | 2025-06-22 20:10:52.117119 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-22 20:10:52.117136 | orchestrator | Sunday 22 June 2025 20:08:17 +0000 (0:00:01.782) 0:00:39.474 *********** 2025-06-22 20:10:52.117147 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:10:52.117158 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:10:52.117169 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:10:52.117179 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:10:52.117190 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:10:52.117201 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:10:52.117220 | orchestrator | 2025-06-22 20:10:52.117231 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-22 20:10:52.117241 | orchestrator | Sunday 22 June 2025 20:08:18 +0000 (0:00:01.845) 0:00:41.320 *********** 2025-06-22 20:10:52.117258 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}])  2025-06-22 20:10:52.117271 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:10:52.117290 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:10:52.117302 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:10:52.117314 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:10:52.117336 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:10:52.117348 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117365 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117377 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117389 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117412 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117423 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:10:52.117434 | orchestrator | 2025-06-22 20:10:52.117445 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-22 20:10:52.117456 | orchestrator | Sunday 22 June 2025 20:08:22 +0000 (0:00:03.366) 0:00:44.686 *********** 2025-06-22 20:10:52.117467 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:52.117478 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:52.117490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:10:52.117500 | orchestrator | 2025-06-22 20:10:52.117511 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-22 20:10:52.117522 | orchestrator | Sunday 22 June 2025 20:08:24 +0000 (0:00:02.290) 0:00:46.977 *********** 2025-06-22 20:10:52.117539 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-22 20:10:52.117550 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-22 20:10:52.117561 | orchestrator | changed: 
[testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-22 20:10:52.117572 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:10:52.117582 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:10:52.117593 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:10:52.117604 | orchestrator | 2025-06-22 20:10:52.117614 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:10:52.117625 | orchestrator | Sunday 22 June 2025 20:08:27 +0000 (0:00:03.310) 0:00:50.288 *********** 2025-06-22 20:10:52.117650 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:10:52.117661 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:10:52.117671 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:10:52.117682 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:10:52.117693 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:10:52.117704 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:10:52.117714 | orchestrator | 2025-06-22 20:10:52.117725 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-22 20:10:52.117736 | orchestrator | Sunday 22 June 2025 20:08:29 +0000 (0:00:01.226) 0:00:51.515 *********** 2025-06-22 20:10:52.117747 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.117757 | orchestrator | 2025-06-22 20:10:52.117768 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-22 20:10:52.117779 | orchestrator | Sunday 22 June 2025 20:08:29 +0000 (0:00:00.510) 0:00:52.025 *********** 2025-06-22 20:10:52.117790 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.117800 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.117811 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.117822 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.117837 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.117852 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.117864 | orchestrator | 2025-06-22 20:10:52.117882 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:10:52.117893 | orchestrator | Sunday 22 June 2025 20:08:30 +0000 (0:00:01.134) 0:00:53.159 *********** 2025-06-22 20:10:52.117905 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:52.117917 | orchestrator | 2025-06-22 20:10:52.117932 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-22 20:10:52.117943 | orchestrator | Sunday 22 June 2025 20:08:32 +0000 (0:00:01.696) 0:00:54.856 *********** 2025-06-22 20:10:52.117954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.117966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.117994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.118141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.118992 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.119028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.119075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.119096 | orchestrator | 2025-06-22 20:10:52.119118 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-22 20:10:52.119138 | orchestrator | Sunday 22 June 2025 20:08:36 +0000 (0:00:03.598) 0:00:58.454 *********** 2025-06-22 20:10:52.119179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119286 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.119298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119316 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.119328 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.119348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119372 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.119388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119412 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.119425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119462 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.119475 | orchestrator | 2025-06-22 20:10:52.119488 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-22 20:10:52.119501 | orchestrator | Sunday 22 June 2025 20:08:37 +0000 (0:00:01.723) 0:01:00.178 *********** 2025-06-22 20:10:52.119514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119546 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.119557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119586 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.119604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.119619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119639 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 20:10:52.119664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119766 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.119786 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.119807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.119838 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.119849 | orchestrator | 2025-06-22 20:10:52.119861 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-22 20:10:52.119875 | orchestrator | Sunday 22 June 2025 20:08:39 +0000 (0:00:01.578) 0:01:01.756 *********** 2025-06-22 20:10:52.119899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.119937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.119967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.119988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.120016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120165 | orchestrator | 2025-06-22 20:10:52.120176 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-22 20:10:52.120187 | orchestrator | Sunday 22 June 2025 20:08:42 +0000 (0:00:02.817) 0:01:04.574 *********** 2025-06-22 20:10:52.120198 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:10:52.120209 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.120220 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:10:52.120232 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.120242 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:10:52.120254 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:10:52.120265 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.120275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:10:52.120293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:10:52.120305 | orchestrator | 2025-06-22 20:10:52.120316 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-22 20:10:52.120327 | orchestrator | Sunday 22 June 2025 20:08:44 +0000 (0:00:02.289) 0:01:06.863 *********** 2025-06-22 20:10:52.120339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.120355 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.120373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.120385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.120519 | orchestrator | 2025-06-22 20:10:52.120530 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-22 20:10:52.120542 | orchestrator | Sunday 22 June 2025 20:08:56 +0000 (0:00:11.866) 0:01:18.730 *********** 2025-06-22 20:10:52.120553 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.120564 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.120575 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.120586 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:10:52.120596 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:10:52.120607 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:10:52.120618 | orchestrator | 2025-06-22 20:10:52.120629 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-22 20:10:52.120641 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:02.257) 0:01:20.987 *********** 2025-06-22 20:10:52.120657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.120669 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120680 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.120698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.120710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.120740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:10:52.120780 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.120791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120802 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.120821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120851 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.120868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:10:52.120892 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.120903 | orchestrator | 2025-06-22 20:10:52.120914 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-22 20:10:52.120925 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:01.037) 0:01:22.025 *********** 2025-06-22 20:10:52.120936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.120947 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.120957 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.120968 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.120979 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.120989 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.121000 | orchestrator | 2025-06-22 20:10:52.121011 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-22 20:10:52.121022 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:00.729) 0:01:22.754 *********** 2025-06-22 20:10:52.121040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.121108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.121125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:10:52.121239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121333 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:52.121420 | orchestrator | 2025-06-22 20:10:52.121439 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:10:52.121459 | orchestrator | Sunday 22 June 2025 20:09:02 +0000 (0:00:02.212) 0:01:24.966 *********** 2025-06-22 20:10:52.121479 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.121497 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:52.121516 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:52.121536 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:52.121555 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:52.121575 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:52.121594 | orchestrator | 2025-06-22 20:10:52.121613 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-22 20:10:52.121632 | orchestrator | Sunday 22 June 2025 20:09:03 +0000 (0:00:00.643) 0:01:25.609 *********** 2025-06-22 20:10:52.121649 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:52.121668 | orchestrator | 2025-06-22 20:10:52.121689 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-22 20:10:52.121707 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:02.196) 0:01:27.806 *********** 2025-06-22 20:10:52.121726 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:52.121738 | orchestrator | 2025-06-22 
20:10:52.121749 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-22 20:10:52.121760 | orchestrator | Sunday 22 June 2025 20:09:07 +0000 (0:00:02.289) 0:01:30.096 *********** 2025-06-22 20:10:52.121770 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:52.121781 | orchestrator | 2025-06-22 20:10:52.121792 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121803 | orchestrator | Sunday 22 June 2025 20:09:27 +0000 (0:00:20.140) 0:01:50.236 *********** 2025-06-22 20:10:52.121814 | orchestrator | 2025-06-22 20:10:52.121832 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121843 | orchestrator | Sunday 22 June 2025 20:09:27 +0000 (0:00:00.059) 0:01:50.296 *********** 2025-06-22 20:10:52.121854 | orchestrator | 2025-06-22 20:10:52.121865 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121875 | orchestrator | Sunday 22 June 2025 20:09:28 +0000 (0:00:00.075) 0:01:50.371 *********** 2025-06-22 20:10:52.121886 | orchestrator | 2025-06-22 20:10:52.121898 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121908 | orchestrator | Sunday 22 June 2025 20:09:28 +0000 (0:00:00.059) 0:01:50.430 *********** 2025-06-22 20:10:52.121919 | orchestrator | 2025-06-22 20:10:52.121930 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121941 | orchestrator | Sunday 22 June 2025 20:09:28 +0000 (0:00:00.059) 0:01:50.490 *********** 2025-06-22 20:10:52.121952 | orchestrator | 2025-06-22 20:10:52.121963 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:10:52.121983 | orchestrator | Sunday 22 June 2025 20:09:28 +0000 (0:00:00.056) 0:01:50.546 *********** 2025-06-22 20:10:52.121994 | orchestrator | 2025-06-22 20:10:52.122005 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-22 20:10:52.122132 | orchestrator | Sunday 22 June 2025 20:09:28 +0000 (0:00:00.060) 0:01:50.607 *********** 2025-06-22 20:10:52.122166 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:52.122180 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:52.122191 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:52.122202 | orchestrator | 2025-06-22 20:10:52.122213 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-22 20:10:52.122224 | orchestrator | Sunday 22 June 2025 20:09:56 +0000 (0:00:27.989) 0:02:18.596 *********** 2025-06-22 20:10:52.122235 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:52.122246 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:52.122256 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:52.122267 | orchestrator | 2025-06-22 20:10:52.122277 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-22 20:10:52.122288 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:10.299) 0:02:28.896 *********** 2025-06-22 20:10:52.122299 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:10:52.122310 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:10:52.122319 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:10:52.122329 | orchestrator | 
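The cinder container definitions logged above declare their Docker healthchecks as small helper commands baked into the kolla images ('healthcheck_port cinder-volume 5672', 'healthcheck_curl http://192.168.16.10:8776', with interval/retries/start_period/timeout given in seconds). The helpers themselves are not part of this log; the Python sketch below only approximates their pass/fail contract (exit 0 = healthy, non-zero = unhealthy) with a plain TCP connect and an HTTP GET. The RabbitMQ host in the TCP check is a placeholder assumption, whereas the cinder-api URL is the one visible in the definitions above.

    # Illustrative approximation of the healthcheck contract seen above; the
    # real kolla healthcheck_port / healthcheck_curl scripts live inside the
    # images and may check more (e.g. which process owns the connection).
    import socket
    import sys
    import urllib.request

    def tcp_check(host: str, port: int, timeout: float = 5.0) -> bool:
        # Healthy if a TCP connection to host:port can be established.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def http_check(url: str, timeout: float = 5.0) -> bool:
        # Healthy if the endpoint answers an HTTP GET with a status below 400.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except OSError:
            return False

    if __name__ == "__main__":
        checks = [
            # Stand-in for 'healthcheck_port cinder-volume 5672'; the RabbitMQ
            # host name below is an assumption, not taken from this log.
            tcp_check("rabbitmq.service.local", 5672),
            # Matches 'healthcheck_curl http://192.168.16.10:8776' (cinder-api).
            http_check("http://192.168.16.10:8776"),
        ]
        sys.exit(0 if all(checks) else 1)
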
2025-06-22 20:10:52.122338 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-22 20:10:52.122348 | orchestrator | Sunday 22 June 2025 20:10:44 +0000 (0:00:37.733) 0:03:06.629 *********** 2025-06-22 20:10:52.122357 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:10:52.122367 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:10:52.122376 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:10:52.122386 | orchestrator | 2025-06-22 20:10:52.122395 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-22 20:10:52.122405 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:04.904) 0:03:11.533 *********** 2025-06-22 20:10:52.122415 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:52.122424 | orchestrator | 2025-06-22 20:10:52.122434 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:52.122507 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:10:52.122520 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:10:52.122531 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:10:52.122541 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:10:52.122551 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:10:52.122561 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:10:52.122571 | orchestrator | 2025-06-22 20:10:52.122581 | orchestrator | 2025-06-22 20:10:52.122591 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:52.122602 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:00.525) 0:03:12.059 *********** 2025-06-22 20:10:52.122620 | orchestrator | =============================================================================== 2025-06-22 20:10:52.122637 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 37.73s 2025-06-22 20:10:52.122655 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.99s 2025-06-22 20:10:52.122687 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.14s 2025-06-22 20:10:52.122705 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.87s 2025-06-22 20:10:52.122722 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.30s 2025-06-22 20:10:52.122760 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.49s 2025-06-22 20:10:52.122778 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.79s 2025-06-22 20:10:52.122794 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 4.90s 2025-06-22 20:10:52.122810 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.00s 2025-06-22 20:10:52.122834 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.98s 2025-06-22 20:10:52.122852 | orchestrator | service-cert-copy : cinder | 
Copying over extra CA certificates --------- 3.60s 2025-06-22 20:10:52.122869 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.39s 2025-06-22 20:10:52.122882 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.37s 2025-06-22 20:10:52.122891 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.34s 2025-06-22 20:10:52.122901 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.31s 2025-06-22 20:10:52.122911 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.12s 2025-06-22 20:10:52.122920 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.82s 2025-06-22 20:10:52.122930 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.29s 2025-06-22 20:10:52.122940 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.29s 2025-06-22 20:10:52.122949 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.29s 2025-06-22 20:10:52.122959 | orchestrator | 2025-06-22 20:10:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:55.162700 | orchestrator | 2025-06-22 20:10:55 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:55.163283 | orchestrator | 2025-06-22 20:10:55 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:55.163849 | orchestrator | 2025-06-22 20:10:55 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 20:10:55.164795 | orchestrator | 2025-06-22 20:10:55 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:55.164816 | orchestrator | 2025-06-22 20:10:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:58.215566 | orchestrator | 2025-06-22 20:10:58 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:10:58.215655 | orchestrator | 2025-06-22 20:10:58 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state STARTED 2025-06-22 20:10:58.217082 | orchestrator | 2025-06-22 20:10:58 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 20:10:58.218984 | orchestrator | 2025-06-22 20:10:58 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:10:58.219098 | orchestrator | 2025-06-22 20:10:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:01.264689 | orchestrator | 2025-06-22 20:11:01 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:11:01.267281 | orchestrator | 2025-06-22 20:11:01.267316 | orchestrator | 2025-06-22 20:11:01.267329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:11:01.267341 | orchestrator | 2025-06-22 20:11:01.267353 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:11:01.267365 | orchestrator | Sunday 22 June 2025 20:10:05 +0000 (0:00:00.270) 0:00:00.270 *********** 2025-06-22 20:11:01.267402 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:11:01.267414 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:11:01.267425 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:11:01.267436 | orchestrator | 2025-06-22 20:11:01.267447 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-06-22 20:11:01.267458 | orchestrator | Sunday 22 June 2025 20:10:05 +0000 (0:00:00.295) 0:00:00.566 *********** 2025-06-22 20:11:01.267469 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-22 20:11:01.267481 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-22 20:11:01.267492 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-22 20:11:01.267503 | orchestrator | 2025-06-22 20:11:01.267514 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-22 20:11:01.267524 | orchestrator | 2025-06-22 20:11:01.267535 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:11:01.267546 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:00.473) 0:00:01.040 *********** 2025-06-22 20:11:01.267556 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:11:01.267568 | orchestrator | 2025-06-22 20:11:01.267579 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-22 20:11:01.267589 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:00.616) 0:00:01.656 *********** 2025-06-22 20:11:01.267600 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-22 20:11:01.267611 | orchestrator | 2025-06-22 20:11:01.267622 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-22 20:11:01.267632 | orchestrator | Sunday 22 June 2025 20:10:10 +0000 (0:00:03.633) 0:00:05.289 *********** 2025-06-22 20:11:01.267643 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-22 20:11:01.267654 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-22 20:11:01.267664 | orchestrator | 2025-06-22 20:11:01.267675 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-22 20:11:01.267686 | orchestrator | Sunday 22 June 2025 20:10:17 +0000 (0:00:06.652) 0:00:11.942 *********** 2025-06-22 20:11:01.267696 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:11:01.267707 | orchestrator | 2025-06-22 20:11:01.267730 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-22 20:11:01.267741 | orchestrator | Sunday 22 June 2025 20:10:20 +0000 (0:00:03.347) 0:00:15.290 *********** 2025-06-22 20:11:01.267751 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:11:01.267762 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:11:01.267773 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:11:01.267784 | orchestrator | 2025-06-22 20:11:01.267794 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-22 20:11:01.267805 | orchestrator | Sunday 22 June 2025 20:10:29 +0000 (0:00:08.434) 0:00:23.724 *********** 2025-06-22 20:11:01.267815 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:11:01.267826 | orchestrator | 2025-06-22 20:11:01.267837 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-22 20:11:01.267848 | orchestrator | Sunday 22 June 2025 20:10:32 
+0000 (0:00:03.748) 0:00:27.472 *********** 2025-06-22 20:11:01.267858 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:11:01.267869 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:11:01.267880 | orchestrator | 2025-06-22 20:11:01.267893 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-22 20:11:01.267906 | orchestrator | Sunday 22 June 2025 20:10:40 +0000 (0:00:07.643) 0:00:35.116 *********** 2025-06-22 20:11:01.267926 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-22 20:11:01.267939 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-22 20:11:01.267951 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-22 20:11:01.267964 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-22 20:11:01.267975 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-22 20:11:01.267985 | orchestrator | 2025-06-22 20:11:01.267996 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:11:01.268007 | orchestrator | Sunday 22 June 2025 20:10:56 +0000 (0:00:16.314) 0:00:51.430 *********** 2025-06-22 20:11:01.268018 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:11:01.268028 | orchestrator | 2025-06-22 20:11:01.268073 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-22 20:11:01.268087 | orchestrator | Sunday 22 June 2025 20:10:57 +0000 (0:00:00.566) 0:00:51.996 *********** 2025-06-22 20:11:01.268099 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-22 20:11:01.268143 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1750623058.847788-6427-12035408812138/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1750623058.847788-6427-12035408812138/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1750623058.847788-6427-12035408812138/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_p3v3y7us/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_p3v3y7us/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_p3v3y7us/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_p3v3y7us/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-06-22 20:11:01.268168 | orchestrator | 2025-06-22 20:11:01.268180 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:11:01.268191 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 
rescued=0 ignored=0 2025-06-22 20:11:01.268203 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:11:01.268214 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:11:01.268225 | orchestrator | 2025-06-22 20:11:01.268236 | orchestrator | 2025-06-22 20:11:01.268247 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:11:01.268258 | orchestrator | Sunday 22 June 2025 20:11:00 +0000 (0:00:03.284) 0:00:55.281 *********** 2025-06-22 20:11:01.268275 | orchestrator | =============================================================================== 2025-06-22 20:11:01.268287 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.31s 2025-06-22 20:11:01.268298 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.43s 2025-06-22 20:11:01.268309 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.64s 2025-06-22 20:11:01.268320 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.65s 2025-06-22 20:11:01.268331 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.75s 2025-06-22 20:11:01.268342 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.63s 2025-06-22 20:11:01.268352 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.35s 2025-06-22 20:11:01.268363 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.28s 2025-06-22 20:11:01.268374 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.62s 2025-06-22 20:11:01.268385 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.57s 2025-06-22 20:11:01.268395 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-06-22 20:11:01.268406 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-06-22 20:11:01.268417 | orchestrator | 2025-06-22 20:11:01 | INFO  | Task b7a91dd6-bfab-4821-a2fd-0409a3e29a9b is in state SUCCESS 2025-06-22 20:11:01.268484 | orchestrator | 2025-06-22 20:11:01 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 20:11:01.270649 | orchestrator | 2025-06-22 20:11:01 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:11:01.270731 | orchestrator | 2025-06-22 20:11:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:04.317011 | orchestrator | 2025-06-22 20:11:04 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:11:04.318712 | orchestrator | 2025-06-22 20:11:04 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 20:11:04.320475 | orchestrator | 2025-06-22 20:11:04 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:11:04.320523 | orchestrator | 2025-06-22 20:11:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:07.362959 | orchestrator | 2025-06-22 20:11:07 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state STARTED 2025-06-22 20:11:07.364407 | orchestrator | 2025-06-22 20:11:07 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state STARTED 2025-06-22 
20:11:07.366553 | orchestrator | 2025-06-22 20:11:07 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:11:07.366878 | orchestrator | 2025-06-22 20:11:07 | INFO  | Wait 1 second(s) until the next check
[... the same status check repeats every ~3 seconds until 20:12:57: tasks d65aa069-3e81-47c2-9c8a-9233878cc65a, b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 and a957110a-2fc3-44b6-b69d-1b881fd1523b remain in state STARTED for the whole interval ...]
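The loop above is the deployment wrapper polling the state of the three outstanding tasks and sleeping between rounds until each one reaches a terminal state. As a rough illustration only (not the actual OSISM client code; the function and parameter names are invented for the example), such a polling loop could look like this in Python:

    import time
    from typing import Callable, Dict, Iterable

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> Dict[str, str]:
        """Poll get_state(task_id) until every task reaches a terminal state."""
        pending = set(task_ids)
        states: Dict[str, str] = {}
        while pending:
            for task_id in sorted(pending):
                states[task_id] = get_state(task_id)
                print(f"Task {task_id} is in state {states[task_id]}")
            # keep only the tasks that have not finished yet
            pending = {t for t in pending if states[t] not in TERMINAL_STATES}
            if pending:
                print(f"Wait {interval:g} second(s) until the next check")
                time.sleep(interval)
        return states

The roughly three-second spacing between checks in the log, despite the one-second wait message, presumably comes from the time the status queries themselves take in each round.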
2025-06-22 20:13:00.174535 | orchestrator | 2025-06-22 20:13:00 | INFO  | Task d65aa069-3e81-47c2-9c8a-9233878cc65a is in state SUCCESS 2025-06-22 20:13:00.178375 | orchestrator | 2025-06-22 20:13:00.178430 | orchestrator | 2025-06-22 20:13:00.178445 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:13:00.178457 | orchestrator | 2025-06-22 20:13:00.178468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:13:00.178480 | orchestrator | Sunday 22 June 2025 20:09:09 +0000 (0:00:00.198) 0:00:00.198 *********** 2025-06-22 20:13:00.178491 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.178503 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:00.178514 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:00.178525 | orchestrator | 2025-06-22 20:13:00.178536 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:13:00.178547 | orchestrator | Sunday 22 June 2025 20:09:09 +0000 (0:00:00.312) 0:00:00.510 *********** 2025-06-22 20:13:00.178559 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:13:00.178571 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:13:00.178582 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:13:00.178593 | orchestrator | 2025-06-22 20:13:00.178604 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-22 20:13:00.178615 | orchestrator | 2025-06-22 20:13:00.178626 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-22 20:13:00.178637 | orchestrator | Sunday 22 June 2025 20:09:10 +0000 (0:00:00.664) 0:00:01.174 *********** 2025-06-22 20:13:00.178649 | orchestrator | 2025-06-22 20:13:00.178739 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-22 20:13:00.178753 | orchestrator | 2025-06-22 20:13:00.178764 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-22 20:13:00.178775 | orchestrator | 2025-06-22 20:13:00.178786 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port 
to be UP' is running] ********** 2025-06-22 20:13:00.178798 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.178809 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:00.178820 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:00.178831 | orchestrator | 2025-06-22 20:13:00.178842 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:13:00.178854 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:00.178868 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:00.178879 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:00.178890 | orchestrator | 2025-06-22 20:13:00.178956 | orchestrator | 2025-06-22 20:13:00.178970 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:13:00.178984 | orchestrator | Sunday 22 June 2025 20:12:58 +0000 (0:03:47.840) 0:03:49.015 *********** 2025-06-22 20:13:00.179027 | orchestrator | =============================================================================== 2025-06-22 20:13:00.179041 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 227.84s 2025-06-22 20:13:00.179054 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-06-22 20:13:00.179067 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-06-22 20:13:00.179079 | orchestrator | 2025-06-22 20:13:00.179092 | orchestrator | 2025-06-22 20:13:00 | INFO  | Task b1dc8aa0-70b3-4a30-8e81-4a4519630aa6 is in state SUCCESS 2025-06-22 20:13:00.181166 | orchestrator | 2025-06-22 20:13:00.181196 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:13:00.181208 | orchestrator | 2025-06-22 20:13:00.181220 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:13:00.181231 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-06-22 20:13:00.181242 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.181253 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:00.181264 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:00.181274 | orchestrator | 2025-06-22 20:13:00.181285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:13:00.181296 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:00.229) 0:00:00.424 *********** 2025-06-22 20:13:00.181307 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-22 20:13:00.181318 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-22 20:13:00.181329 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-22 20:13:00.181340 | orchestrator | 2025-06-22 20:13:00.181351 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-22 20:13:00.181361 | orchestrator | 2025-06-22 20:13:00.181414 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:13:00.181426 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:00.305) 0:00:00.729 *********** 2025-06-22 20:13:00.181437 | orchestrator | included: 
/ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:13:00.181537 | orchestrator | 2025-06-22 20:13:00.181565 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-22 20:13:00.181577 | orchestrator | Sunday 22 June 2025 20:10:54 +0000 (0:00:00.474) 0:00:01.203 *********** 2025-06-22 20:13:00.181591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181734 | orchestrator | 2025-06-22 20:13:00.181746 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-22 20:13:00.181758 | orchestrator | Sunday 22 June 2025 20:10:54 +0000 (0:00:00.728) 0:00:01.931 *********** 2025-06-22 20:13:00.181771 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-22 20:13:00.181785 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-22 20:13:00.181798 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:13:00.181810 | orchestrator | 2025-06-22 20:13:00.181824 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:13:00.181836 | orchestrator | Sunday 22 June 2025 20:10:55 +0000 (0:00:00.793) 0:00:02.725 *********** 2025-06-22 20:13:00.181849 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
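The (item={'key': 'grafana', 'value': {...}}) payload that the grafana tasks above and below loop over is hard to read in the run-together console output. Laid out as a Python literal it looks like the sketch below; the values are copied from this log, while the variable name grafana_services is only an assumed label following the usual kolla-ansible <role>_services convention.

    # Grafana service definition as it appears in the loop items of this log.
    # "grafana_services" is an assumed name, not taken from the log itself.
    grafana_services = {
        "grafana": {
            "container_name": "grafana",
            "group": "grafana",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/grafana:12.0.1.20250530",
            "volumes": [
                "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
            "haproxy": {
                # internal listener on the VIP, port 3000
                "grafana_server": {
                    "enabled": "yes",
                    "mode": "http",
                    "external": False,
                    "port": "3000",
                    "listen_port": "3000",
                },
                # external listener published as api.testbed.osism.xyz, port 3000
                "grafana_server_external": {
                    "enabled": True,
                    "mode": "http",
                    "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "3000",
                    "listen_port": "3000",
                },
            },
        },
    }

Each of the following tasks ("Ensuring config directories exist", "Copying over config.json files", "Copying over grafana.ini", and so on) iterates over this dictionary, which is why the same payload is echoed once per node for every task.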
2025-06-22 20:13:00.181862 | orchestrator | 2025-06-22 20:13:00.181874 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-22 20:13:00.181887 | orchestrator | Sunday 22 June 2025 20:10:56 +0000 (0:00:00.596) 0:00:03.322 *********** 2025-06-22 20:13:00.181939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.181989 | orchestrator | 2025-06-22 20:13:00.182002 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-22 20:13:00.182014 | orchestrator | Sunday 22 June 2025 20:10:57 +0000 (0:00:01.158) 0:00:04.480 *********** 2025-06-22 20:13:00.182089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.182117 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182130 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.182154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182166 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.182177 | orchestrator | 2025-06-22 20:13:00.182189 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-22 20:13:00.182200 | orchestrator | Sunday 22 June 2025 20:10:57 +0000 (0:00:00.349) 0:00:04.830 *********** 2025-06-22 20:13:00.182216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182228 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.182239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182257 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.182268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:13:00.182280 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.182291 | orchestrator | 2025-06-22 20:13:00.182302 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-22 20:13:00.182313 | orchestrator | Sunday 22 June 2025 20:10:58 +0000 (0:00:00.661) 0:00:05.492 *********** 2025-06-22 20:13:00.182324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182366 | orchestrator | 2025-06-22 20:13:00.182377 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-22 20:13:00.182388 | orchestrator | Sunday 22 June 2025 20:10:59 +0000 (0:00:01.079) 0:00:06.571 *********** 2025-06-22 20:13:00.182405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.182488 | orchestrator | 2025-06-22 20:13:00.182499 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-22 20:13:00.182510 | orchestrator | Sunday 22 June 2025 20:11:00 +0000 (0:00:01.303) 0:00:07.874 *********** 2025-06-22 20:13:00.182520 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.182532 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.182548 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.182565 | orchestrator | 2025-06-22 20:13:00.182583 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-22 20:13:00.182605 | orchestrator | Sunday 22 June 2025 20:11:01 +0000 (0:00:00.445) 0:00:08.319 *********** 2025-06-22 20:13:00.182637 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:13:00.182652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:13:00.182669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:13:00.182684 | orchestrator | 2025-06-22 20:13:00.182701 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-22 20:13:00.182773 | orchestrator | Sunday 22 June 2025 20:11:02 +0000 (0:00:01.197) 0:00:09.517 *********** 2025-06-22 20:13:00.182792 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:13:00.182820 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:13:00.182832 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:13:00.182843 | orchestrator | 2025-06-22 20:13:00.182854 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-22 20:13:00.182865 | orchestrator | Sunday 22 June 2025 20:11:03 +0000 (0:00:01.143) 0:00:10.661 *********** 2025-06-22 20:13:00.182876 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:13:00.182886 | orchestrator | 2025-06-22 20:13:00.182897 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-22 20:13:00.182950 | orchestrator | Sunday 22 June 2025 20:11:04 +0000 (0:00:00.786) 0:00:11.447 *********** 2025-06-22 20:13:00.182962 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-22 20:13:00.182973 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-22 20:13:00.182997 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.183008 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:00.183019 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:00.183030 | orchestrator | 2025-06-22 20:13:00.183041 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-22 20:13:00.183052 | orchestrator | Sunday 22 June 2025 20:11:05 +0000 (0:00:00.712) 0:00:12.159 *********** 2025-06-22 20:13:00.183062 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.183073 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.183084 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.183095 | orchestrator | 2025-06-22 20:13:00.183112 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-22 20:13:00.183124 | orchestrator | Sunday 22 June 2025 20:11:05 +0000 (0:00:00.505) 0:00:12.665 *********** 2025-06-22 20:13:00.183137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1068662, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.382986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1068662, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.382986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1068662, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.382986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1068656, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1068656, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1068656, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1068652, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.377986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1068652, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.377986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1068652, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.377986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1068660, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1068660, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1068660, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1068637, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.369986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1068637, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.369986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1068637, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.369986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1068653, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1068653, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1068653, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1068659, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1068659, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1068659, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3809862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1068632, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.368986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1068632, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.368986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1068632, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.368986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1068618, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.361986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1068618, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.361986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1068618, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.361986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1068638, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.370986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1068638, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.370986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1068638, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.370986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1068625, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3659859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1068625, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3659859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1068625, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3659859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1068657, 'dev': 115, 
'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1068657, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1068657, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.379986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1068646, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1068646, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1068646, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1068661, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3819861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1068661, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3819861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.183971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1068661, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3819861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1068629, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.367986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1068629, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.367986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184034 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1068629, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.367986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1068654, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1068654, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1068654, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3789861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1068621, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.364986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1068621, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.364986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1068621, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.364986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1068628, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3669858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1068628, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3669858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1068628, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3669858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1068651, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1068651, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1068651, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.376986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1068691, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3999865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1068691, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3999865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1068691, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3999865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1068685, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3919864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1068685, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3919864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1068685, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3919864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1068666, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1068666, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1068666, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1068713, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4069865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1068713, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4069865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1068713, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4069865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1068667, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1068667, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1068667, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3839862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1068708, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1068708, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1068708, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1068720, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4089866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1068720, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4089866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1068720, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4089866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1068699, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4009864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1068699, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4009864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.184987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1068699, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4009864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1068705, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1068705, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1068705, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4029865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1068668, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1068668, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1068668, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1068688, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1068688, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1068688, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185119 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1068726, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4099865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1068726, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4099865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1068726, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4099865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1068710, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4039865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1068710, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4039865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1068710, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4039865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1068671, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1068671, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1068671, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1068669, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1068669, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1068669, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3849862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1068672, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1068672, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1068672, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3869863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1068673, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620254.3909862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1068673, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3909862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1068673, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3909862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1068689, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1068689, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1068689, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3929863, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1068704, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4019864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1068704, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4019864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1068704, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4019864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1068690, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3939862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1068690, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3939862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1068729, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4109867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1068690, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.3939862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1068729, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4109867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1068729, 'dev': 115, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620254.4109867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:13:00.185508 | orchestrator | 2025-06-22 20:13:00.185519 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-22 20:13:00.185529 | orchestrator | Sunday 22 June 2025 20:11:40 +0000 (0:00:35.256) 0:00:47.922 *********** 2025-06-22 20:13:00.185539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.185549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.185559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:13:00.185569 | orchestrator | 2025-06-22 20:13:00.185579 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-22 20:13:00.185589 | orchestrator | Sunday 22 June 2025 20:11:41 +0000 (0:00:00.964) 0:00:48.887 *********** 2025-06-22 20:13:00.185598 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:00.185609 | orchestrator | 2025-06-22 20:13:00.185618 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-22 20:13:00.185633 | orchestrator | Sunday 22 June 2025 20:11:44 +0000 (0:00:02.304) 0:00:51.191 *********** 2025-06-22 20:13:00.185643 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:00.185653 | orchestrator | 2025-06-22 20:13:00.185662 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:13:00.185672 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:02.346) 0:00:53.537 *********** 2025-06-22 20:13:00.185681 | orchestrator | 2025-06-22 20:13:00.185691 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:13:00.185707 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.172) 0:00:53.710 *********** 2025-06-22 20:13:00.185716 | orchestrator | 2025-06-22 20:13:00.185726 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:13:00.185736 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.059) 0:00:53.769 *********** 2025-06-22 20:13:00.185745 | orchestrator | 2025-06-22 20:13:00.185755 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-22 20:13:00.185764 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.058) 0:00:53.828 *********** 2025-06-22 
20:13:00.185774 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.185784 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.185793 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:00.185803 | orchestrator | 2025-06-22 20:13:00.185812 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-22 20:13:00.185822 | orchestrator | Sunday 22 June 2025 20:11:48 +0000 (0:00:01.891) 0:00:55.719 *********** 2025-06-22 20:13:00.185831 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.185841 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.185855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-22 20:13:00.185865 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-22 20:13:00.185875 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-22 20:13:00.185884 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.185894 | orchestrator | 2025-06-22 20:13:00.185924 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-22 20:13:00.185935 | orchestrator | Sunday 22 June 2025 20:12:27 +0000 (0:00:38.858) 0:01:34.577 *********** 2025-06-22 20:13:00.185944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.185954 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:13:00.185963 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:13:00.185973 | orchestrator | 2025-06-22 20:13:00.185983 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-22 20:13:00.185992 | orchestrator | Sunday 22 June 2025 20:12:54 +0000 (0:00:26.654) 0:02:01.232 *********** 2025-06-22 20:13:00.186002 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:00.186011 | orchestrator | 2025-06-22 20:13:00.186058 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-22 20:13:00.186069 | orchestrator | Sunday 22 June 2025 20:12:56 +0000 (0:00:02.428) 0:02:03.660 *********** 2025-06-22 20:13:00.186078 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.186088 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:00.186097 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:00.186107 | orchestrator | 2025-06-22 20:13:00.186117 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-22 20:13:00.186126 | orchestrator | Sunday 22 June 2025 20:12:56 +0000 (0:00:00.280) 0:02:03.941 *********** 2025-06-22 20:13:00.186138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-22 20:13:00.186150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-22 20:13:00.186161 | orchestrator | 2025-06-22 20:13:00.186171 | 
orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-22 20:13:00.186180 | orchestrator | Sunday 22 June 2025 20:12:59 +0000 (0:00:02.562) 0:02:06.504 *********** 2025-06-22 20:13:00.186196 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:00.186206 | orchestrator | 2025-06-22 20:13:00.186216 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:13:00.186227 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:13:00.186237 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:13:00.186247 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:13:00.186257 | orchestrator | 2025-06-22 20:13:00.186266 | orchestrator | 2025-06-22 20:13:00.186276 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:13:00.186286 | orchestrator | Sunday 22 June 2025 20:12:59 +0000 (0:00:00.233) 0:02:06.737 *********** 2025-06-22 20:13:00.186295 | orchestrator | =============================================================================== 2025-06-22 20:13:00.186311 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.86s 2025-06-22 20:13:00.186320 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.26s 2025-06-22 20:13:00.186330 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.65s 2025-06-22 20:13:00.186339 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.56s 2025-06-22 20:13:00.186349 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s 2025-06-22 20:13:00.186359 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2025-06-22 20:13:00.186368 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2025-06-22 20:13:00.186377 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s 2025-06-22 20:13:00.186387 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s 2025-06-22 20:13:00.186397 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2025-06-22 20:13:00.186406 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.16s 2025-06-22 20:13:00.186416 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.14s 2025-06-22 20:13:00.186425 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.08s 2025-06-22 20:13:00.186435 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.96s 2025-06-22 20:13:00.186449 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s 2025-06-22 20:13:00.186459 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s 2025-06-22 20:13:00.186468 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.73s 2025-06-22 20:13:00.186478 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.71s 2025-06-22 
20:13:00.186488 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s 2025-06-22 20:13:00.186497 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s 2025-06-22 20:13:00.186507 | orchestrator | 2025-06-22 20:13:00 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:00.186517 | orchestrator | 2025-06-22 20:13:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:03.232699 | orchestrator | 2025-06-22 20:13:03 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:03.232804 | orchestrator | 2025-06-22 20:13:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:06.286185 | orchestrator | 2025-06-22 20:13:06 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:06.286289 | orchestrator | 2025-06-22 20:13:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:09.336776 | orchestrator | 2025-06-22 20:13:09 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:09.336878 | orchestrator | 2025-06-22 20:13:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:12.384740 | orchestrator | 2025-06-22 20:13:12 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:12.384840 | orchestrator | 2025-06-22 20:13:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:15.432523 | orchestrator | 2025-06-22 20:13:15 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:15.432613 | orchestrator | 2025-06-22 20:13:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:18.470979 | orchestrator | 2025-06-22 20:13:18 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:18.471082 | orchestrator | 2025-06-22 20:13:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:21.524661 | orchestrator | 2025-06-22 20:13:21 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:21.525165 | orchestrator | 2025-06-22 20:13:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:24.572602 | orchestrator | 2025-06-22 20:13:24 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:24.572691 | orchestrator | 2025-06-22 20:13:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:27.613211 | orchestrator | 2025-06-22 20:13:27 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:27.613313 | orchestrator | 2025-06-22 20:13:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:30.650996 | orchestrator | 2025-06-22 20:13:30 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:30.651093 | orchestrator | 2025-06-22 20:13:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:33.690226 | orchestrator | 2025-06-22 20:13:33 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:33.690326 | orchestrator | 2025-06-22 20:13:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:36.742012 | orchestrator | 2025-06-22 20:13:36 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:36.742199 | orchestrator | 2025-06-22 20:13:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:39.786726 | orchestrator | 2025-06-22 20:13:39 | INFO  | Task 
a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:39.786841 | orchestrator | 2025-06-22 20:13:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:42.837829 | orchestrator | 2025-06-22 20:13:42 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:42.837961 | orchestrator | 2025-06-22 20:13:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:45.876417 | orchestrator | 2025-06-22 20:13:45 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:45.876539 | orchestrator | 2025-06-22 20:13:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:48.908703 | orchestrator | 2025-06-22 20:13:48 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:48.908805 | orchestrator | 2025-06-22 20:13:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:51.946396 | orchestrator | 2025-06-22 20:13:51 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:51.946498 | orchestrator | 2025-06-22 20:13:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:54.989460 | orchestrator | 2025-06-22 20:13:54 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:54.991171 | orchestrator | 2025-06-22 20:13:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:58.036977 | orchestrator | 2025-06-22 20:13:58 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:13:58.037064 | orchestrator | 2025-06-22 20:13:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:01.078013 | orchestrator | 2025-06-22 20:14:01 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:01.078186 | orchestrator | 2025-06-22 20:14:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:04.120813 | orchestrator | 2025-06-22 20:14:04 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:04.123433 | orchestrator | 2025-06-22 20:14:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:07.152638 | orchestrator | 2025-06-22 20:14:07 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:07.152735 | orchestrator | 2025-06-22 20:14:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:10.191842 | orchestrator | 2025-06-22 20:14:10 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:10.192017 | orchestrator | 2025-06-22 20:14:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:13.247584 | orchestrator | 2025-06-22 20:14:13 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:13.251817 | orchestrator | 2025-06-22 20:14:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:16.290704 | orchestrator | 2025-06-22 20:14:16 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:16.290802 | orchestrator | 2025-06-22 20:14:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:19.331198 | orchestrator | 2025-06-22 20:14:19 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:19.331295 | orchestrator | 2025-06-22 20:14:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:22.379523 | orchestrator | 2025-06-22 20:14:22 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 
20:14:22.379622 | orchestrator | 2025-06-22 20:14:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:25.418979 | orchestrator | 2025-06-22 20:14:25 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:25.419075 | orchestrator | 2025-06-22 20:14:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:28.473052 | orchestrator | 2025-06-22 20:14:28 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:28.473147 | orchestrator | 2025-06-22 20:14:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:31.509234 | orchestrator | 2025-06-22 20:14:31 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:31.509331 | orchestrator | 2025-06-22 20:14:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:34.562452 | orchestrator | 2025-06-22 20:14:34 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:34.562546 | orchestrator | 2025-06-22 20:14:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:37.607516 | orchestrator | 2025-06-22 20:14:37 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:37.607615 | orchestrator | 2025-06-22 20:14:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:40.644023 | orchestrator | 2025-06-22 20:14:40 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:40.644122 | orchestrator | 2025-06-22 20:14:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:43.695663 | orchestrator | 2025-06-22 20:14:43 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:43.695775 | orchestrator | 2025-06-22 20:14:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:46.751722 | orchestrator | 2025-06-22 20:14:46 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:46.751807 | orchestrator | 2025-06-22 20:14:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:49.801139 | orchestrator | 2025-06-22 20:14:49 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:49.801233 | orchestrator | 2025-06-22 20:14:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:52.852086 | orchestrator | 2025-06-22 20:14:52 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:52.852185 | orchestrator | 2025-06-22 20:14:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:55.893371 | orchestrator | 2025-06-22 20:14:55 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:55.893470 | orchestrator | 2025-06-22 20:14:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:58.940813 | orchestrator | 2025-06-22 20:14:58 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:14:58.940957 | orchestrator | 2025-06-22 20:14:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:01.984064 | orchestrator | 2025-06-22 20:15:01 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:01.984165 | orchestrator | 2025-06-22 20:15:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:05.032614 | orchestrator | 2025-06-22 20:15:05 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:05.032731 | orchestrator | 2025-06-22 20:15:05 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 20:15:08.077701 | orchestrator | 2025-06-22 20:15:08 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:08.077800 | orchestrator | 2025-06-22 20:15:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:11.128986 | orchestrator | 2025-06-22 20:15:11 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:11.129044 | orchestrator | 2025-06-22 20:15:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:14.165052 | orchestrator | 2025-06-22 20:15:14 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:14.165137 | orchestrator | 2025-06-22 20:15:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:17.220997 | orchestrator | 2025-06-22 20:15:17 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:17.221106 | orchestrator | 2025-06-22 20:15:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:20.268743 | orchestrator | 2025-06-22 20:15:20 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:20.268924 | orchestrator | 2025-06-22 20:15:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:23.317899 | orchestrator | 2025-06-22 20:15:23 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:23.318078 | orchestrator | 2025-06-22 20:15:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:26.361695 | orchestrator | 2025-06-22 20:15:26 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:26.361824 | orchestrator | 2025-06-22 20:15:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:29.422142 | orchestrator | 2025-06-22 20:15:29 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:29.423327 | orchestrator | 2025-06-22 20:15:29 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state STARTED 2025-06-22 20:15:29.424199 | orchestrator | 2025-06-22 20:15:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:32.478958 | orchestrator | 2025-06-22 20:15:32 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:32.481302 | orchestrator | 2025-06-22 20:15:32 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state STARTED 2025-06-22 20:15:32.481392 | orchestrator | 2025-06-22 20:15:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:35.532917 | orchestrator | 2025-06-22 20:15:35 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:35.534209 | orchestrator | 2025-06-22 20:15:35 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state STARTED 2025-06-22 20:15:35.534886 | orchestrator | 2025-06-22 20:15:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:38.582191 | orchestrator | 2025-06-22 20:15:38 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:38.584198 | orchestrator | 2025-06-22 20:15:38 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state STARTED 2025-06-22 20:15:38.584513 | orchestrator | 2025-06-22 20:15:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:41.638868 | orchestrator | 2025-06-22 20:15:41 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:41.640195 | orchestrator | 2025-06-22 20:15:41 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in 
state STARTED 2025-06-22 20:15:41.640234 | orchestrator | 2025-06-22 20:15:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:44.700659 | orchestrator | 2025-06-22 20:15:44 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:44.702443 | orchestrator | 2025-06-22 20:15:44 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state STARTED 2025-06-22 20:15:44.702612 | orchestrator | 2025-06-22 20:15:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:47.749591 | orchestrator | 2025-06-22 20:15:47 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:47.752964 | orchestrator | 2025-06-22 20:15:47 | INFO  | Task 315ad8d0-8f18-4856-befa-8d07337c84ac is in state SUCCESS 2025-06-22 20:15:47.753024 | orchestrator | 2025-06-22 20:15:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:50.799827 | orchestrator | 2025-06-22 20:15:50 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:50.799917 | orchestrator | 2025-06-22 20:15:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:53.841256 | orchestrator | 2025-06-22 20:15:53 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:53.841346 | orchestrator | 2025-06-22 20:15:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:56.886101 | orchestrator | 2025-06-22 20:15:56 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:56.886190 | orchestrator | 2025-06-22 20:15:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:59.931347 | orchestrator | 2025-06-22 20:15:59 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:15:59.931443 | orchestrator | 2025-06-22 20:15:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:02.974663 | orchestrator | 2025-06-22 20:16:02 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:02.974829 | orchestrator | 2025-06-22 20:16:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:06.026554 | orchestrator | 2025-06-22 20:16:06 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:06.026672 | orchestrator | 2025-06-22 20:16:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:09.065180 | orchestrator | 2025-06-22 20:16:09 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:09.065281 | orchestrator | 2025-06-22 20:16:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:12.105437 | orchestrator | 2025-06-22 20:16:12 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:12.105519 | orchestrator | 2025-06-22 20:16:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:15.159429 | orchestrator | 2025-06-22 20:16:15 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:15.159530 | orchestrator | 2025-06-22 20:16:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:18.200549 | orchestrator | 2025-06-22 20:16:18 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:18.200641 | orchestrator | 2025-06-22 20:16:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:21.242493 | orchestrator | 2025-06-22 20:16:21 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:21.242627 | orchestrator | 
2025-06-22 20:16:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:24.286898 | orchestrator | 2025-06-22 20:16:24 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:24.286998 | orchestrator | 2025-06-22 20:16:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:27.335546 | orchestrator | 2025-06-22 20:16:27 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:27.335722 | orchestrator | 2025-06-22 20:16:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:30.382969 | orchestrator | 2025-06-22 20:16:30 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:30.383064 | orchestrator | 2025-06-22 20:16:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:33.422770 | orchestrator | 2025-06-22 20:16:33 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:33.422858 | orchestrator | 2025-06-22 20:16:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:36.451328 | orchestrator | 2025-06-22 20:16:36 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:36.451423 | orchestrator | 2025-06-22 20:16:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:39.483251 | orchestrator | 2025-06-22 20:16:39 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:39.483933 | orchestrator | 2025-06-22 20:16:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:42.523704 | orchestrator | 2025-06-22 20:16:42 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:42.523927 | orchestrator | 2025-06-22 20:16:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:45.565558 | orchestrator | 2025-06-22 20:16:45 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:45.565700 | orchestrator | 2025-06-22 20:16:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:48.613546 | orchestrator | 2025-06-22 20:16:48 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:48.613705 | orchestrator | 2025-06-22 20:16:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:51.654970 | orchestrator | 2025-06-22 20:16:51 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:51.655071 | orchestrator | 2025-06-22 20:16:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:54.701821 | orchestrator | 2025-06-22 20:16:54 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:54.701923 | orchestrator | 2025-06-22 20:16:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:57.749535 | orchestrator | 2025-06-22 20:16:57 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:16:57.749707 | orchestrator | 2025-06-22 20:16:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:00.798425 | orchestrator | 2025-06-22 20:17:00 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:17:00.798527 | orchestrator | 2025-06-22 20:17:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:03.844839 | orchestrator | 2025-06-22 20:17:03 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:17:03.844938 | orchestrator | 2025-06-22 20:17:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
20:17:06.892755 | orchestrator | 2025-06-22 20:17:06 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:17:06.892851 | orchestrator | 2025-06-22 20:17:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:09.935483 | orchestrator | 2025-06-22 20:17:09 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:17:09.935630 | orchestrator | 2025-06-22 20:17:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:12.982825 | orchestrator | 2025-06-22 20:17:12 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state STARTED 2025-06-22 20:17:12.982923 | orchestrator | 2025-06-22 20:17:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:16.025183 | orchestrator | 2025-06-22 20:17:16 | INFO  | Task a957110a-2fc3-44b6-b69d-1b881fd1523b is in state SUCCESS 2025-06-22 20:17:16.026849 | orchestrator | 2025-06-22 20:17:16.026893 | orchestrator | None 2025-06-22 20:17:16.026907 | orchestrator | 2025-06-22 20:17:16.026919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:17:16.026930 | orchestrator | 2025-06-22 20:17:16.026942 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-22 20:17:16.026954 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:00.207) 0:00:00.207 *********** 2025-06-22 20:17:16.026965 | orchestrator | changed: [testbed-manager] 2025-06-22 20:17:16.026977 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.026988 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.026999 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.027010 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.027021 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.027032 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.027043 | orchestrator | 2025-06-22 20:17:16.027054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:17:16.027163 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:00.936) 0:00:01.143 *********** 2025-06-22 20:17:16.027235 | orchestrator | changed: [testbed-manager] 2025-06-22 20:17:16.027249 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.027260 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.027270 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.027281 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.027292 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.027303 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.027313 | orchestrator | 2025-06-22 20:17:16.027338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:17:16.027350 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:00.520) 0:00:01.663 *********** 2025-06-22 20:17:16.027361 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-22 20:17:16.027371 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:17:16.027382 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:17:16.027393 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:17:16.027403 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-22 20:17:16.027414 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 
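The repeated "Task a957110a-… is in state STARTED" / "Wait 1 second(s) until the next check" lines above show the client on the manager polling the background task that runs this play until it reports a terminal state (SUCCESS here); for a stretch it also reports a second task (315ad8d0-…) in the same checks. A minimal sketch of that polling pattern, assuming a hypothetical get_task_state(task_id) callable in place of the real client API:

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

    def wait_for_task(task_id, get_task_state, interval=1.0):
        """Poll a task until it reaches a terminal state, echoing each check like the log above."""
        while True:
            state = get_task_state(task_id)   # e.g. "STARTED", eventually "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                return state
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

The roughly one-second cadence matches the timestamps in the log; the real client may check several task IDs per iteration, as the 315ad8d0-… entries suggest.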
2025-06-22 20:17:16.027425 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-22 20:17:16.027436 | orchestrator | 2025-06-22 20:17:16.027449 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-22 20:17:16.027461 | orchestrator | 2025-06-22 20:17:16.027474 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 20:17:16.027487 | orchestrator | Sunday 22 June 2025 20:09:01 +0000 (0:00:00.915) 0:00:02.579 *********** 2025-06-22 20:17:16.027499 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.027512 | orchestrator | 2025-06-22 20:17:16.027524 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-22 20:17:16.027537 | orchestrator | Sunday 22 June 2025 20:09:01 +0000 (0:00:00.591) 0:00:03.170 *********** 2025-06-22 20:17:16.027590 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-22 20:17:16.027628 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-22 20:17:16.027640 | orchestrator | 2025-06-22 20:17:16.027686 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-22 20:17:16.027701 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:04.247) 0:00:07.417 *********** 2025-06-22 20:17:16.027714 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:17:16.027726 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:17:16.027739 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.027751 | orchestrator | 2025-06-22 20:17:16.027764 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:17:16.027777 | orchestrator | Sunday 22 June 2025 20:09:10 +0000 (0:00:04.370) 0:00:11.787 *********** 2025-06-22 20:17:16.027790 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.027802 | orchestrator | 2025-06-22 20:17:16.027813 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-22 20:17:16.027824 | orchestrator | Sunday 22 June 2025 20:09:10 +0000 (0:00:00.624) 0:00:12.412 *********** 2025-06-22 20:17:16.027835 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.027846 | orchestrator | 2025-06-22 20:17:16.027856 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-22 20:17:16.027867 | orchestrator | Sunday 22 June 2025 20:09:12 +0000 (0:00:01.253) 0:00:13.666 *********** 2025-06-22 20:17:16.027878 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.027888 | orchestrator | 2025-06-22 20:17:16.027900 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:17:16.027911 | orchestrator | Sunday 22 June 2025 20:09:14 +0000 (0:00:02.312) 0:00:15.978 *********** 2025-06-22 20:17:16.027922 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.027933 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.027954 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.027965 | orchestrator | 2025-06-22 20:17:16.027976 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 20:17:16.027987 | orchestrator | Sunday 22 June 2025 20:09:14 +0000 (0:00:00.274) 0:00:16.252 *********** 2025-06-22 20:17:16.027998 | orchestrator | ok: 
[testbed-node-0] 2025-06-22 20:17:16.028009 | orchestrator | 2025-06-22 20:17:16.028020 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-22 20:17:16.028030 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:31.486) 0:00:47.739 *********** 2025-06-22 20:17:16.028041 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.028052 | orchestrator | 2025-06-22 20:17:16.028063 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:17:16.028073 | orchestrator | Sunday 22 June 2025 20:09:59 +0000 (0:00:13.654) 0:01:01.393 *********** 2025-06-22 20:17:16.028084 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.028095 | orchestrator | 2025-06-22 20:17:16.028106 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:17:16.028117 | orchestrator | Sunday 22 June 2025 20:10:10 +0000 (0:00:10.821) 0:01:12.215 *********** 2025-06-22 20:17:16.028213 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.028226 | orchestrator | 2025-06-22 20:17:16.028238 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-22 20:17:16.028249 | orchestrator | Sunday 22 June 2025 20:10:11 +0000 (0:00:00.915) 0:01:13.130 *********** 2025-06-22 20:17:16.028260 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.028281 | orchestrator | 2025-06-22 20:17:16.028293 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:17:16.028303 | orchestrator | Sunday 22 June 2025 20:10:12 +0000 (0:00:00.430) 0:01:13.560 *********** 2025-06-22 20:17:16.028314 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.028353 | orchestrator | 2025-06-22 20:17:16.028365 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 20:17:16.028376 | orchestrator | Sunday 22 June 2025 20:10:12 +0000 (0:00:00.521) 0:01:14.082 *********** 2025-06-22 20:17:16.028386 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.028397 | orchestrator | 2025-06-22 20:17:16.028408 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 20:17:16.028419 | orchestrator | Sunday 22 June 2025 20:10:30 +0000 (0:00:18.094) 0:01:32.176 *********** 2025-06-22 20:17:16.028429 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.028440 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028457 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028468 | orchestrator | 2025-06-22 20:17:16.028479 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-22 20:17:16.028490 | orchestrator | 2025-06-22 20:17:16.028501 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 20:17:16.028511 | orchestrator | Sunday 22 June 2025 20:10:30 +0000 (0:00:00.307) 0:01:32.484 *********** 2025-06-22 20:17:16.028522 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.028533 | orchestrator | 2025-06-22 20:17:16.028544 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-22 20:17:16.028554 | orchestrator | Sunday 22 June 2025 20:10:31 +0000 (0:00:00.521) 
0:01:33.005 *********** 2025-06-22 20:17:16.028565 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028576 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028586 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.028636 | orchestrator | 2025-06-22 20:17:16.028648 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-22 20:17:16.028659 | orchestrator | Sunday 22 June 2025 20:10:33 +0000 (0:00:02.180) 0:01:35.185 *********** 2025-06-22 20:17:16.028670 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028690 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028701 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.028712 | orchestrator | 2025-06-22 20:17:16.028722 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 20:17:16.028733 | orchestrator | Sunday 22 June 2025 20:10:35 +0000 (0:00:02.263) 0:01:37.449 *********** 2025-06-22 20:17:16.028744 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.028755 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028766 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028777 | orchestrator | 2025-06-22 20:17:16.028787 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 20:17:16.028798 | orchestrator | Sunday 22 June 2025 20:10:36 +0000 (0:00:00.348) 0:01:37.797 *********** 2025-06-22 20:17:16.028809 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:17:16.028820 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028831 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:17:16.028842 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028852 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 20:17:16.028863 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-22 20:17:16.028874 | orchestrator | 2025-06-22 20:17:16.028885 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 20:17:16.028896 | orchestrator | Sunday 22 June 2025 20:10:45 +0000 (0:00:09.466) 0:01:47.264 *********** 2025-06-22 20:17:16.028906 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.028917 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.028928 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.028939 | orchestrator | 2025-06-22 20:17:16.028950 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 20:17:16.028960 | orchestrator | Sunday 22 June 2025 20:10:46 +0000 (0:00:00.368) 0:01:47.632 *********** 2025-06-22 20:17:16.028971 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:17:16.028982 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.028993 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:17:16.029004 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029015 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:17:16.029025 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029036 | orchestrator | 2025-06-22 20:17:16.029047 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:17:16.029058 | orchestrator | Sunday 22 June 2025 20:10:46 +0000 (0:00:00.590) 0:01:48.223 
*********** 2025-06-22 20:17:16.029069 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029080 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029091 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.029101 | orchestrator | 2025-06-22 20:17:16.029112 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-22 20:17:16.029123 | orchestrator | Sunday 22 June 2025 20:10:47 +0000 (0:00:00.449) 0:01:48.673 *********** 2025-06-22 20:17:16.029134 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029144 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029155 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.029166 | orchestrator | 2025-06-22 20:17:16.029177 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-22 20:17:16.029188 | orchestrator | Sunday 22 June 2025 20:10:48 +0000 (0:00:00.903) 0:01:49.576 *********** 2025-06-22 20:17:16.029199 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029217 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029228 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.029239 | orchestrator | 2025-06-22 20:17:16.029250 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-22 20:17:16.029260 | orchestrator | Sunday 22 June 2025 20:10:50 +0000 (0:00:02.055) 0:01:51.632 *********** 2025-06-22 20:17:16.029271 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029288 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029299 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.029310 | orchestrator | 2025-06-22 20:17:16.029321 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:17:16.029332 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:19.786) 0:02:11.418 *********** 2025-06-22 20:17:16.029342 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029353 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029364 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.029375 | orchestrator | 2025-06-22 20:17:16.029385 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:17:16.029396 | orchestrator | Sunday 22 June 2025 20:11:21 +0000 (0:00:11.164) 0:02:22.583 *********** 2025-06-22 20:17:16.029407 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.029418 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029428 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029439 | orchestrator | 2025-06-22 20:17:16.029450 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-22 20:17:16.029466 | orchestrator | Sunday 22 June 2025 20:11:21 +0000 (0:00:00.897) 0:02:23.481 *********** 2025-06-22 20:17:16.029477 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029488 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029499 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.029510 | orchestrator | 2025-06-22 20:17:16.029520 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-22 20:17:16.029531 | orchestrator | Sunday 22 June 2025 20:11:32 +0000 (0:00:10.999) 0:02:34.481 *********** 2025-06-22 20:17:16.029542 | orchestrator | skipping: [testbed-node-0] 
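The nova-cell bootstrap steps above follow a list/compare/act pattern: "Get a list of existing cells", "Extract current cell settings from list", then either "Create cell" or "Update cell" depending on whether a registered cell with matching settings already exists. A rough Python sketch of that decision logic, with plain dicts standing in for the parsed cell list (field names here are illustrative, not the role's exact variable names):

    def reconcile_cell(existing_cells, name, transport_url, database_connection):
        """Return which action the bootstrap would take for the given cell."""
        current = next((c for c in existing_cells if c.get("name") == name), None)
        if current is None:
            return "create"   # 'Create cell' runs and reports changed
        if (current.get("transport_url") != transport_url
                or current.get("database_connection") != database_connection):
            return "update"   # otherwise 'Update cell' would run
        return "noop"         # both tasks skipped when settings already match

    # Illustrative call with made-up values (not taken from this deployment):
    cells = []   # no cell registered yet
    print(reconcile_cell(cells, "cell1", "rabbit://...", "mysql+pymysql://..."))  # -> "create"

In this run no matching cell existed yet, so "Create cell" reports changed on testbed-node-0 while "Update cell" is skipped on all nodes.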
2025-06-22 20:17:16.029553 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029574 | orchestrator | 2025-06-22 20:17:16.029585 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 20:17:16.029659 | orchestrator | Sunday 22 June 2025 20:11:34 +0000 (0:00:01.639) 0:02:36.120 *********** 2025-06-22 20:17:16.029672 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.029683 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.029693 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.029704 | orchestrator | 2025-06-22 20:17:16.029715 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-22 20:17:16.029726 | orchestrator | 2025-06-22 20:17:16.029737 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:17:16.029748 | orchestrator | Sunday 22 June 2025 20:11:34 +0000 (0:00:00.334) 0:02:36.455 *********** 2025-06-22 20:17:16.029759 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.029772 | orchestrator | 2025-06-22 20:17:16.029783 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-22 20:17:16.029794 | orchestrator | Sunday 22 June 2025 20:11:35 +0000 (0:00:00.568) 0:02:37.024 *********** 2025-06-22 20:17:16.029805 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-22 20:17:16.029816 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-22 20:17:16.029827 | orchestrator | 2025-06-22 20:17:16.029837 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-22 20:17:16.029846 | orchestrator | Sunday 22 June 2025 20:11:38 +0000 (0:00:03.151) 0:02:40.175 *********** 2025-06-22 20:17:16.029856 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-22 20:17:16.029868 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-22 20:17:16.029877 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-22 20:17:16.029894 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-22 20:17:16.029904 | orchestrator | 2025-06-22 20:17:16.029913 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-22 20:17:16.029923 | orchestrator | Sunday 22 June 2025 20:11:45 +0000 (0:00:06.998) 0:02:47.174 *********** 2025-06-22 20:17:16.029933 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:17:16.029942 | orchestrator | 2025-06-22 20:17:16.029952 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-22 20:17:16.029962 | orchestrator | Sunday 22 June 2025 20:11:49 +0000 (0:00:03.749) 0:02:50.923 *********** 2025-06-22 20:17:16.029972 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:17:16.029982 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-22 20:17:16.029991 | orchestrator | 2025-06-22 20:17:16.030001 | 
orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-22 20:17:16.030011 | orchestrator | Sunday 22 June 2025 20:11:53 +0000 (0:00:04.178) 0:02:55.102 *********** 2025-06-22 20:17:16.030075 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:17:16.030085 | orchestrator | 2025-06-22 20:17:16.030095 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-22 20:17:16.030105 | orchestrator | Sunday 22 June 2025 20:11:57 +0000 (0:00:03.518) 0:02:58.621 *********** 2025-06-22 20:17:16.030115 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-22 20:17:16.030124 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-22 20:17:16.030134 | orchestrator | 2025-06-22 20:17:16.030144 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:17:16.030161 | orchestrator | Sunday 22 June 2025 20:12:06 +0000 (0:00:09.159) 0:03:07.780 *********** 2025-06-22 20:17:16.030182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030274 | orchestrator | 2025-06-22 20:17:16.030284 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-22 20:17:16.030294 | orchestrator | Sunday 22 June 2025 20:12:07 +0000 (0:00:01.304) 0:03:09.084 *********** 2025-06-22 
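For readability: the loop items echoed by these nova tasks are kolla-ansible service definitions printed as Python dicts. Rendered as YAML (a readability aid reconstructed from the values logged for testbed-node-0, not the upstream role defaults; empty volume slots, the empty dimensions map and the nova_metadata_external entry are omitted for brevity), the nova-api entry looks roughly like this:

nova-api:
  container_name: nova_api
  group: nova-api
  image: registry.osism.tech/kolla/release/nova-api:30.0.1.20250530
  enabled: true
  privileged: true
  volumes:
    - "/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "/lib/modules:/lib/modules:ro"
    - "kolla_logs:/var/log/kolla/"
  healthcheck:                 # test is executed inside the container (CMD-SHELL)
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774"]
    timeout: "30"
  haproxy:
    nova_api:
      enabled: true
      mode: http
      external: false
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_api_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_metadata:
      enabled: true
      mode: http
      external: false
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"

The haproxy sub-keys feed the load-balancer configuration on the controllers, and tls_backend: "no" is consistent with the backend TLS copy tasks further down being skipped.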
20:17:16.030304 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.030314 | orchestrator | 2025-06-22 20:17:16.030323 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-22 20:17:16.030333 | orchestrator | Sunday 22 June 2025 20:12:07 +0000 (0:00:00.135) 0:03:09.220 *********** 2025-06-22 20:17:16.030349 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.030359 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.030369 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.030378 | orchestrator | 2025-06-22 20:17:16.030388 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-22 20:17:16.030398 | orchestrator | Sunday 22 June 2025 20:12:08 +0000 (0:00:00.503) 0:03:09.724 *********** 2025-06-22 20:17:16.030408 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:17:16.030418 | orchestrator | 2025-06-22 20:17:16.030427 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-22 20:17:16.030437 | orchestrator | Sunday 22 June 2025 20:12:08 +0000 (0:00:00.610) 0:03:10.334 *********** 2025-06-22 20:17:16.030446 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.030456 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.030466 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.030475 | orchestrator | 2025-06-22 20:17:16.030485 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:17:16.030494 | orchestrator | Sunday 22 June 2025 20:12:09 +0000 (0:00:00.296) 0:03:10.630 *********** 2025-06-22 20:17:16.030504 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.030514 | orchestrator | 2025-06-22 20:17:16.030524 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:17:16.030533 | orchestrator | Sunday 22 June 2025 20:12:09 +0000 (0:00:00.667) 0:03:11.298 *********** 2025-06-22 20:17:16.030550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.030621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.030662 | orchestrator | 2025-06-22 20:17:16.030672 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:17:16.030682 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:02.290) 0:03:13.588 *********** 2025-06-22 20:17:16.030697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030725 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.030735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.030804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030831 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.030840 | orchestrator | 2025-06-22 20:17:16.030850 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:17:16.030860 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.493) 0:03:14.082 *********** 2025-06-22 20:17:16.030870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030891 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.030912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030940 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.030950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.030961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.030971 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.030981 | orchestrator | 2025-06-22 20:17:16.030990 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-22 20:17:16.031000 | orchestrator | Sunday 22 June 2025 20:12:13 +0000 (0:00:00.985) 0:03:15.067 *********** 2025-06-22 20:17:16.031104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031200 | orchestrator | 2025-06-22 20:17:16.031210 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-22 20:17:16.031220 | orchestrator | Sunday 22 June 2025 20:12:15 +0000 (0:00:02.280) 0:03:17.348 *********** 2025-06-22 20:17:16.031231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031458 | orchestrator | 2025-06-22 20:17:16.031468 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-22 20:17:16.031478 | orchestrator | Sunday 22 June 2025 20:12:20 +0000 (0:00:05.053) 0:03:22.402 *********** 2025-06-22 20:17:16.031496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.031514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.031524 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.031539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.031550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.031560 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.031570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:17:16.031614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.031626 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.031636 | orchestrator | 2025-06-22 20:17:16.031646 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-22 20:17:16.031655 | orchestrator | Sunday 22 June 2025 20:12:21 +0000 (0:00:00.545) 0:03:22.947 *********** 2025-06-22 20:17:16.031665 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.031675 | orchestrator 
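The copy tasks in this stretch ("Copying over config.json files for services", "Copying over nova.conf", "Copying over nova-api-wsgi.conf") all follow the same kolla-ansible pattern: a file is rendered on the target into /etc/kolla/<service>/, which is bind-mounted read-only into the container at /var/lib/kolla/config_files/ (see the volumes list above); the container entrypoint then copies the files into place and starts the service as described by config.json. A much-simplified stand-in using plain ansible.builtin.template (kolla-ansible itself merges several template sources via its merge_configs plugin; template name, mode and ownership below are illustrative only) would be:

- name: Copying over nova.conf (simplified sketch)
  ansible.builtin.template:
    src: nova.conf.j2                      # illustrative template name
    dest: /etc/kolla/nova-api/nova.conf    # host path that is mounted into the container
    mode: "0660"                           # mode/owner illustrative
  become: true
  notify:
    - Restart nova-api container           # handler name as it appears later in this log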
| changed: [testbed-node-1] 2025-06-22 20:17:16.031685 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.031694 | orchestrator | 2025-06-22 20:17:16.031704 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-22 20:17:16.031714 | orchestrator | Sunday 22 June 2025 20:12:23 +0000 (0:00:01.935) 0:03:24.883 *********** 2025-06-22 20:17:16.031723 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.031733 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.031747 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.031756 | orchestrator | 2025-06-22 20:17:16.031766 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-22 20:17:16.031776 | orchestrator | Sunday 22 June 2025 20:12:23 +0000 (0:00:00.373) 0:03:25.257 *********** 2025-06-22 20:17:16.031786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:17:16.031859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.031902 | orchestrator | 2025-06-22 20:17:16.031912 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:17:16.031970 | orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:01.884) 0:03:27.142 *********** 2025-06-22 20:17:16.032005 | orchestrator | 2025-06-22 20:17:16.032015 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:17:16.032024 | 
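The "Flush handlers" tasks here force any queued handlers to run immediately, which is what triggers the "Restart nova-scheduler container" and "Restart nova-api container" handlers seen just below now that the configuration files have changed. kolla-ansible performs the restart with its own container-management module; purely to illustrate the shape of such a handler (a generic stand-in, not the actual kolla-ansible code), with community.docker.docker_container it would look something like:

- name: Restart nova-api container
  community.docker.docker_container:
    name: nova_api
    image: registry.osism.tech/kolla/release/nova-api:30.0.1.20250530
    state: started
    restart: true       # force a restart even if the container is already running
    volumes:
      - /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro
      - kolla_logs:/var/log/kolla/
  become: true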
orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:00.122) 0:03:27.264 *********** 2025-06-22 20:17:16.032040 | orchestrator | 2025-06-22 20:17:16.032050 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:17:16.032060 | orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:00.119) 0:03:27.384 *********** 2025-06-22 20:17:16.032070 | orchestrator | 2025-06-22 20:17:16.032079 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-22 20:17:16.032089 | orchestrator | Sunday 22 June 2025 20:12:26 +0000 (0:00:00.283) 0:03:27.668 *********** 2025-06-22 20:17:16.032121 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.032133 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.032143 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.032153 | orchestrator | 2025-06-22 20:17:16.032162 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-22 20:17:16.032172 | orchestrator | Sunday 22 June 2025 20:12:50 +0000 (0:00:24.259) 0:03:51.928 *********** 2025-06-22 20:17:16.032182 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.032191 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.032201 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.032210 | orchestrator | 2025-06-22 20:17:16.032220 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-22 20:17:16.032230 | orchestrator | 2025-06-22 20:17:16.032239 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:17:16.032249 | orchestrator | Sunday 22 June 2025 20:13:01 +0000 (0:00:10.948) 0:04:02.877 *********** 2025-06-22 20:17:16.032259 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.032269 | orchestrator | 2025-06-22 20:17:16.032284 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:17:16.032294 | orchestrator | Sunday 22 June 2025 20:13:02 +0000 (0:00:01.156) 0:04:04.033 *********** 2025-06-22 20:17:16.032304 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.032314 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.032323 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.032333 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.032342 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.032352 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.032361 | orchestrator | 2025-06-22 20:17:16.032371 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-22 20:17:16.032381 | orchestrator | Sunday 22 June 2025 20:13:03 +0000 (0:00:00.743) 0:04:04.777 *********** 2025-06-22 20:17:16.032391 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.032400 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.032409 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.032419 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:17:16.032428 | orchestrator | 2025-06-22 20:17:16.032438 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 20:17:16.032448 | orchestrator 
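The nova-cell play then prepares the compute hosts (testbed-node-3/4/5) for bridged instance networking: the included module-load role loads br_netfilter right away and persists it via modules-load.d, and the "Enable bridge-nf-call sysctl variables" task that follows enables iptables/ip6tables processing of bridged traffic, which is typically required when security groups are enforced via iptables. A minimal plain-Ansible equivalent (module and path names are assumptions, not taken from the actual roles) would be:

- name: Load br_netfilter now
  community.general.modprobe:
    name: br_netfilter
    state: present
  become: true

- name: Persist br_netfilter via modules-load.d
  ansible.builtin.copy:
    dest: /etc/modules-load.d/br_netfilter.conf   # path illustrative
    content: "br_netfilter\n"
    mode: "0644"
  become: true

- name: Enable bridge-nf-call sysctl variables
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"              # value assumed; the log only shows the keys being changed
    sysctl_set: true
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
  become: true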
| Sunday 22 June 2025 20:13:04 +0000 (0:00:00.967) 0:04:05.744 *********** 2025-06-22 20:17:16.032458 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:17:16.032467 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:17:16.032482 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:17:16.032491 | orchestrator | 2025-06-22 20:17:16.032501 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 20:17:16.032511 | orchestrator | Sunday 22 June 2025 20:13:04 +0000 (0:00:00.670) 0:04:06.415 *********** 2025-06-22 20:17:16.032520 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:17:16.032530 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:17:16.032539 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:17:16.032549 | orchestrator | 2025-06-22 20:17:16.032559 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 20:17:16.032578 | orchestrator | Sunday 22 June 2025 20:13:06 +0000 (0:00:01.250) 0:04:07.666 *********** 2025-06-22 20:17:16.032588 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-22 20:17:16.032622 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.032632 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-22 20:17:16.032642 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.032651 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-22 20:17:16.032661 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.032671 | orchestrator | 2025-06-22 20:17:16.032680 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-22 20:17:16.032690 | orchestrator | Sunday 22 June 2025 20:13:06 +0000 (0:00:00.685) 0:04:08.351 *********** 2025-06-22 20:17:16.032700 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:17:16.032710 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:17:16.032719 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.032729 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:17:16.032739 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:17:16.032749 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.032758 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:17:16.032768 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:17:16.032778 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:17:16.032787 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:17:16.032797 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.032807 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:17:16.032816 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:17:16.032826 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:17:16.032836 | orchestrator | changed: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:17:16.032845 | orchestrator | 2025-06-22 20:17:16.032855 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-22 20:17:16.032865 | orchestrator | Sunday 22 June 2025 20:13:08 +0000 (0:00:02.037) 0:04:10.389 *********** 2025-06-22 20:17:16.032874 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.032884 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.032894 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.032903 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.032913 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.032923 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.032933 | orchestrator | 2025-06-22 20:17:16.032943 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-22 20:17:16.032952 | orchestrator | Sunday 22 June 2025 20:13:10 +0000 (0:00:01.182) 0:04:11.571 *********** 2025-06-22 20:17:16.032962 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.032972 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.032982 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.032991 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.033001 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.033011 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.033020 | orchestrator | 2025-06-22 20:17:16.033030 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:17:16.033040 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:01.466) 0:04:13.038 *********** 2025-06-22 20:17:16.033058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033261 | orchestrator | 2025-06-22 20:17:16.033271 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:17:16.033281 | orchestrator | Sunday 22 June 2025 20:13:14 +0000 (0:00:02.534) 0:04:15.572 *********** 2025-06-22 20:17:16.033291 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:17:16.033301 | orchestrator | 2025-06-22 20:17:16.033311 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:17:16.033321 | orchestrator | Sunday 22 June 2025 20:13:15 +0000 (0:00:01.255) 0:04:16.828 *********** 2025-06-22 20:17:16.033331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.033539 | orchestrator | 2025-06-22 20:17:16.033549 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:17:16.033559 | orchestrator | Sunday 22 June 2025 20:13:18 +0000 (0:00:03.057) 0:04:19.886 *********** 2025-06-22 
20:17:16.033570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.033580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.033590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033650 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.033668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.033679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.033694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033705 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.033715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.033725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.033741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033751 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.033767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.033782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033793 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.033803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.033813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033824 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.033840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.033850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.033860 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.033870 | orchestrator | 2025-06-22 20:17:16.033880 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:17:16.033889 | orchestrator | Sunday 22 June 2025 20:13:20 +0000 (0:00:01.760) 0:04:21.646 *********** 2025-06-22 20:17:16.034362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.034386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.034395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034404 | orchestrator | 
skipping: [testbed-node-3] 2025-06-22 20:17:16.034413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.034428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.034443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034453 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.034465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.034474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.034482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034496 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.034504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.034513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.034527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034549 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.034557 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.034566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.034579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.034588 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.034611 | orchestrator | 2025-06-22 20:17:16.034620 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:17:16.034629 | orchestrator | Sunday 22 June 2025 20:13:22 +0000 (0:00:01.915) 0:04:23.562 *********** 2025-06-22 20:17:16.034637 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.034645 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.034653 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.034661 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:17:16.034669 | orchestrator | 2025-06-22 20:17:16.034677 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-22 20:17:16.034686 | orchestrator | Sunday 22 June 2025 20:13:22 +0000 (0:00:00.822) 0:04:24.385 *********** 2025-06-22 20:17:16.034694 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:17:16.034702 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:17:16.034710 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:17:16.034718 | orchestrator | 2025-06-22 20:17:16.034727 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-22 20:17:16.034735 | orchestrator | Sunday 22 June 2025 20:13:23 +0000 (0:00:01.135) 0:04:25.520 *********** 2025-06-22 20:17:16.034743 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:17:16.034751 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 
20:17:16.034759 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:17:16.034767 | orchestrator | 2025-06-22 20:17:16.034775 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-22 20:17:16.034783 | orchestrator | Sunday 22 June 2025 20:13:24 +0000 (0:00:00.898) 0:04:26.419 *********** 2025-06-22 20:17:16.034792 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:17:16.034800 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:17:16.034809 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:17:16.034817 | orchestrator | 2025-06-22 20:17:16.034825 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-22 20:17:16.034833 | orchestrator | Sunday 22 June 2025 20:13:25 +0000 (0:00:00.495) 0:04:26.915 *********** 2025-06-22 20:17:16.034841 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:17:16.034849 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:17:16.034857 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:17:16.034865 | orchestrator | 2025-06-22 20:17:16.034873 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-22 20:17:16.034881 | orchestrator | Sunday 22 June 2025 20:13:25 +0000 (0:00:00.523) 0:04:27.439 *********** 2025-06-22 20:17:16.034890 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:17:16.034903 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:17:16.034912 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:17:16.034920 | orchestrator | 2025-06-22 20:17:16.034929 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-22 20:17:16.034937 | orchestrator | Sunday 22 June 2025 20:13:27 +0000 (0:00:01.314) 0:04:28.753 *********** 2025-06-22 20:17:16.034946 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:17:16.034954 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:17:16.034962 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:17:16.034979 | orchestrator | 2025-06-22 20:17:16.034988 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-22 20:17:16.034997 | orchestrator | Sunday 22 June 2025 20:13:28 +0000 (0:00:01.119) 0:04:29.872 *********** 2025-06-22 20:17:16.035005 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:17:16.035013 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:17:16.035022 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:17:16.035030 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-22 20:17:16.035038 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-22 20:17:16.035051 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-22 20:17:16.035059 | orchestrator | 2025-06-22 20:17:16.035068 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-22 20:17:16.035076 | orchestrator | Sunday 22 June 2025 20:13:32 +0000 (0:00:03.722) 0:04:33.595 *********** 2025-06-22 20:17:16.035085 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.035093 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.035101 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.035109 | orchestrator | 
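The external-Ceph tasks above stage ceph.conf and the client.nova / client.cinder keyrings into the per-service kolla config directories on the compute hosts; the tasks that follow register the same keys as libvirt secrets so that nova_libvirt can attach RBD-backed volumes. A minimal sketch of what this amounts to on a single compute host, reusing the client.nova secret UUID and name shown in this log, with the filenames, target path, and direct virsh invocation assumed purely for illustration (the role itself drives these steps through Ansible against the kolla config directories and containers):

# Illustrative staging of the external Ceph client config for nova-compute
# (paths and filenames are assumptions for this sketch).
install -d /etc/kolla/nova-compute
cp ceph.conf ceph.client.nova.keyring ceph.client.cinder.keyring /etc/kolla/nova-compute/

# Define a libvirt secret for client.nova under the UUID used in this run,
# then load the key out of the keyring; the same pattern applies to client.cinder.
cat > nova-secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file nova-secret.xml
virsh secret-set-value 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd "$(awk '/key = / {print $3}' ceph.client.nova.keyring)"

Once the containers are up, the outcome can be checked from inside the deployed container with docker exec nova_libvirt virsh secret-list, which should show both secret UUIDs.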
2025-06-22 20:17:16.035118 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-22 20:17:16.035126 | orchestrator | Sunday 22 June 2025 20:13:32 +0000 (0:00:00.306) 0:04:33.902 *********** 2025-06-22 20:17:16.035134 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.035142 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.035151 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.035159 | orchestrator | 2025-06-22 20:17:16.035167 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-22 20:17:16.035175 | orchestrator | Sunday 22 June 2025 20:13:32 +0000 (0:00:00.269) 0:04:34.171 *********** 2025-06-22 20:17:16.035184 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.035192 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.035200 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.035208 | orchestrator | 2025-06-22 20:17:16.035217 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-22 20:17:16.035225 | orchestrator | Sunday 22 June 2025 20:13:34 +0000 (0:00:01.436) 0:04:35.608 *********** 2025-06-22 20:17:16.035234 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:17:16.035243 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:17:16.035252 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:17:16.035260 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:17:16.035268 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:17:16.035276 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:17:16.035285 | orchestrator | 2025-06-22 20:17:16.035293 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-22 20:17:16.035301 | orchestrator | Sunday 22 June 2025 20:13:37 +0000 (0:00:03.135) 0:04:38.743 *********** 2025-06-22 20:17:16.035310 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:17:16.035318 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:17:16.035326 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:17:16.035335 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:17:16.035343 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.035356 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:17:16.035365 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.035373 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:17:16.035381 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.035390 | orchestrator | 2025-06-22 20:17:16.035398 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-22 20:17:16.035406 | orchestrator | Sunday 22 June 2025 20:13:40 
+0000 (0:00:03.289) 0:04:42.033 *********** 2025-06-22 20:17:16.035415 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.035423 | orchestrator | 2025-06-22 20:17:16.035431 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-22 20:17:16.035440 | orchestrator | Sunday 22 June 2025 20:13:40 +0000 (0:00:00.128) 0:04:42.162 *********** 2025-06-22 20:17:16.035448 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.035456 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.035464 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.035472 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.035481 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.035489 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.035497 | orchestrator | 2025-06-22 20:17:16.035506 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-22 20:17:16.035518 | orchestrator | Sunday 22 June 2025 20:13:41 +0000 (0:00:00.739) 0:04:42.901 *********** 2025-06-22 20:17:16.035527 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:17:16.035535 | orchestrator | 2025-06-22 20:17:16.035544 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-22 20:17:16.035552 | orchestrator | Sunday 22 June 2025 20:13:42 +0000 (0:00:00.697) 0:04:43.599 *********** 2025-06-22 20:17:16.035560 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.035569 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.035577 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.035585 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.035607 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.035615 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.035623 | orchestrator | 2025-06-22 20:17:16.035632 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-22 20:17:16.035640 | orchestrator | Sunday 22 June 2025 20:13:42 +0000 (0:00:00.584) 0:04:44.184 *********** 2025-06-22 20:17:16.035652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2025-06-22 20:17:16.035787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035819 | orchestrator | 2025-06-22 20:17:16.035827 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-22 20:17:16.035836 | orchestrator | Sunday 22 June 2025 20:13:46 +0000 (0:00:04.071) 0:04:48.256 *********** 2025-06-22 20:17:16.035844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.035857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.035870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.035879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.035892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.035901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.035914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-06-22 20:17:16.035958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.035989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.036001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.036015 | orchestrator | 2025-06-22 20:17:16.036023 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-22 20:17:16.036032 | orchestrator | Sunday 22 June 2025 20:13:52 +0000 (0:00:05.984) 0:04:54.240 *********** 2025-06-22 20:17:16.036040 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 20:17:16.036048 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.036056 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.036064 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036081 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036089 | orchestrator | 2025-06-22 20:17:16.036097 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-22 20:17:16.036105 | orchestrator | Sunday 22 June 2025 20:13:54 +0000 (0:00:01.489) 0:04:55.729 *********** 2025-06-22 20:17:16.036113 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:17:16.036121 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:17:16.036130 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:17:16.036138 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:17:16.036146 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:17:16.036154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:17:16.036162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:17:16.036170 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036178 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:17:16.036186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036194 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:17:16.036202 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036210 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:17:16.036218 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:17:16.036227 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:17:16.036235 | orchestrator | 2025-06-22 20:17:16.036243 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-22 20:17:16.036251 | orchestrator | Sunday 22 June 2025 20:13:57 +0000 (0:00:03.673) 0:04:59.403 *********** 2025-06-22 20:17:16.036259 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.036267 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.036275 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.036283 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036291 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036299 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036307 | orchestrator | 2025-06-22 20:17:16.036316 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-22 20:17:16.036324 | orchestrator | Sunday 22 June 2025 20:13:58 +0000 (0:00:00.803) 0:05:00.207 *********** 2025-06-22 20:17:16.036332 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-compute'})  2025-06-22 20:17:16.036340 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:17:16.036353 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:17:16.036362 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:17:16.036377 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:17:16.036385 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:17:16.036393 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036401 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036409 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036418 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036426 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036438 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036446 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036454 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:17:16.036462 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036470 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036478 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036486 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036494 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036502 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036510 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:17:16.036518 | orchestrator | 2025-06-22 20:17:16.036526 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-22 20:17:16.036534 | orchestrator | Sunday 22 June 2025 20:14:04 +0000 (0:00:05.382) 0:05:05.589 *********** 2025-06-22 20:17:16.036543 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:17:16.036551 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:17:16.036559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:17:16.036567 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:17:16.036575 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:17:16.036583 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:17:16.036633 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:17:16.036643 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:17:16.036651 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:17:16.036659 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:17:16.036667 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:17:16.036675 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:17:16.036689 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:17:16.036697 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036705 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:17:16.036713 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:17:16.036721 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036729 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:17:16.036738 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036746 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:17:16.036754 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:17:16.036762 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:17:16.036770 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:17:16.036782 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:17:16.036791 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:17:16.036799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:17:16.036807 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:17:16.036815 | orchestrator | 2025-06-22 20:17:16.036824 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-22 20:17:16.036832 | orchestrator | Sunday 22 June 2025 20:14:10 +0000 (0:00:06.836) 0:05:12.426 *********** 2025-06-22 20:17:16.036840 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.036848 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.036856 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.036864 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036872 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036880 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036889 | orchestrator | 
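
Editor's note: the loop items echoed by the tasks above all share the same key/value service-definition shape (container_name, group, enabled, image, volumes, healthcheck). As an illustration only — this is not kolla-ansible's actual code — a minimal Python sketch of reading such a map, skipping disabled services, and dropping the empty-string volume entries that appear to act as placeholders for unset optional mounts in the log:

```python
# Illustrative sketch only; mirrors the service map printed in the loop items
# above, it is NOT the kolla-ansible implementation.
services = {
    "nova-ssh": {
        "container_name": "nova_ssh",
        "group": "compute",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530",
        # empty strings seem to be placeholders for optional mounts
        "volumes": ["/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro",
                    "kolla_logs:/var/log/kolla", ""],
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8022"]},
    },
}

for name, svc in services.items():
    if not svc.get("enabled"):
        continue                                  # disabled services are skipped
    volumes = [v for v in svc["volumes"] if v]    # drop '' placeholders
    print(name, svc["container_name"], volumes, svc["healthcheck"]["test"])
```

The same shape recurs for nova-libvirt, nova-compute, nova-conductor and nova-novncproxy throughout this play, which is why each task iterates over the identical item dictionaries.
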
2025-06-22 20:17:16.036897 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-22 20:17:16.036905 | orchestrator | Sunday 22 June 2025 20:14:11 +0000 (0:00:00.693) 0:05:13.119 *********** 2025-06-22 20:17:16.036913 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.036925 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.036933 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.036941 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036950 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.036958 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.036966 | orchestrator | 2025-06-22 20:17:16.036974 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-22 20:17:16.036982 | orchestrator | Sunday 22 June 2025 20:14:12 +0000 (0:00:00.831) 0:05:13.951 *********** 2025-06-22 20:17:16.036990 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.036998 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.037006 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.037014 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.037022 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.037029 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.037036 | orchestrator | 2025-06-22 20:17:16.037043 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-22 20:17:16.037050 | orchestrator | Sunday 22 June 2025 20:14:14 +0000 (0:00:02.008) 0:05:15.960 *********** 2025-06-22 20:17:16.037057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.037069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.037076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.037095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.037106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.037113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:17:16.037125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:17:16.037140 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.037152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037159 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.037170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.037177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037189 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.037196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.037204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037211 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.037218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:17:16.037229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:17:16.037237 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.037244 | orchestrator | 2025-06-22 20:17:16.037251 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-22 20:17:16.037258 | orchestrator | Sunday 22 June 2025 20:14:16 +0000 (0:00:01.793) 0:05:17.753 *********** 2025-06-22 20:17:16.037265 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:17:16.037272 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:17:16.037279 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.037286 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:17:16.037293 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:17:16.037300 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.037307 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:17:16.037314 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 
20:17:16.037321 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.037327 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:17:16.037334 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:17:16.037349 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.037357 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:17:16.037363 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:17:16.037370 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.037377 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 20:17:16.037384 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:17:16.037391 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.037397 | orchestrator | 2025-06-22 20:17:16.037404 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-22 20:17:16.037411 | orchestrator | Sunday 22 June 2025 20:14:16 +0000 (0:00:00.638) 0:05:18.392 *********** 2025-06-22 20:17:16.037418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:17:16.037582 | orchestrator | 2025-06-22 20:17:16.037589 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:17:16.037610 | orchestrator | Sunday 22 June 2025 20:14:19 +0000 (0:00:02.907) 0:05:21.299 *********** 2025-06-22 20:17:16.037617 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.037624 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.037631 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.037642 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.037657 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.037664 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.037671 | orchestrator | 2025-06-22 20:17:16.037678 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:17:16.037685 | orchestrator | Sunday 22 June 2025 20:14:20 +0000 (0:00:00.595) 0:05:21.895 *********** 2025-06-22 20:17:16.037692 | orchestrator | 2025-06-22 20:17:16.037699 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:17:16.037706 | orchestrator | Sunday 22 June 2025 20:14:20 +0000 (0:00:00.379) 0:05:22.274 *********** 2025-06-22 20:17:16.037713 | orchestrator | 2025-06-22 20:17:16.037720 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:17:16.037727 | orchestrator | Sunday 22 June 2025 20:14:20 +0000 (0:00:00.137) 0:05:22.412 *********** 2025-06-22 20:17:16.037734 | orchestrator | 2025-06-22 20:17:16.037741 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:17:16.037748 | orchestrator | Sunday 22 June 2025 20:14:21 +0000 (0:00:00.138) 0:05:22.551 *********** 2025-06-22 20:17:16.037755 | orchestrator | 2025-06-22 20:17:16.037762 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-06-22 20:17:16.037769 | orchestrator | Sunday 22 June 2025 20:14:21 +0000 (0:00:00.141) 0:05:22.693 *********** 2025-06-22 20:17:16.037776 | orchestrator | 2025-06-22 20:17:16.037786 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:17:16.037793 | orchestrator | Sunday 22 June 2025 20:14:21 +0000 (0:00:00.132) 0:05:22.826 *********** 2025-06-22 20:17:16.037800 | orchestrator | 2025-06-22 20:17:16.037807 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-22 20:17:16.037815 | orchestrator | Sunday 22 June 2025 20:14:21 +0000 (0:00:00.131) 0:05:22.957 *********** 2025-06-22 20:17:16.037821 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.037828 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.037835 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.037842 | orchestrator | 2025-06-22 20:17:16.037849 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-22 20:17:16.037856 | orchestrator | Sunday 22 June 2025 20:14:28 +0000 (0:00:07.115) 0:05:30.072 *********** 2025-06-22 20:17:16.037863 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.037870 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.037877 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.037884 | orchestrator | 2025-06-22 20:17:16.037891 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-22 20:17:16.037898 | orchestrator | Sunday 22 June 2025 20:14:40 +0000 (0:00:11.629) 0:05:41.702 *********** 2025-06-22 20:17:16.037905 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.037912 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.037919 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.037926 | orchestrator | 2025-06-22 20:17:16.037933 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-22 20:17:16.037940 | orchestrator | Sunday 22 June 2025 20:15:05 +0000 (0:00:25.382) 0:06:07.085 *********** 2025-06-22 20:17:16.037947 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.037954 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.037961 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.037968 | orchestrator | 2025-06-22 20:17:16.037975 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-22 20:17:16.037982 | orchestrator | Sunday 22 June 2025 20:15:44 +0000 (0:00:38.539) 0:06:45.624 *********** 2025-06-22 20:17:16.037989 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.037996 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.038003 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.038010 | orchestrator | 2025-06-22 20:17:16.038038 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-22 20:17:16.038045 | orchestrator | Sunday 22 June 2025 20:15:45 +0000 (0:00:01.129) 0:06:46.754 *********** 2025-06-22 20:17:16.038057 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.038064 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.038071 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.038077 | orchestrator | 2025-06-22 20:17:16.038084 | orchestrator | RUNNING HANDLER [nova-cell : Restart 
nova-compute container] ******************* 2025-06-22 20:17:16.038091 | orchestrator | Sunday 22 June 2025 20:15:45 +0000 (0:00:00.768) 0:06:47.522 *********** 2025-06-22 20:17:16.038098 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:17:16.038104 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:17:16.038111 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:17:16.038118 | orchestrator | 2025-06-22 20:17:16.038125 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-22 20:17:16.038132 | orchestrator | Sunday 22 June 2025 20:16:05 +0000 (0:00:19.998) 0:07:07.521 *********** 2025-06-22 20:17:16.038138 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.038145 | orchestrator | 2025-06-22 20:17:16.038152 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-22 20:17:16.038158 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.129) 0:07:07.651 *********** 2025-06-22 20:17:16.038165 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.038172 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.038179 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.038185 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.038192 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.038199 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-22 20:17:16.038206 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:17:16.038213 | orchestrator | 2025-06-22 20:17:16.038220 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-22 20:17:16.038226 | orchestrator | Sunday 22 June 2025 20:16:28 +0000 (0:00:22.829) 0:07:30.480 *********** 2025-06-22 20:17:16.038233 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.038240 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.038247 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.038253 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.038263 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.038270 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.038277 | orchestrator | 2025-06-22 20:17:16.038284 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-22 20:17:16.038290 | orchestrator | Sunday 22 June 2025 20:16:37 +0000 (0:00:08.361) 0:07:38.842 *********** 2025-06-22 20:17:16.038297 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.038304 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.038311 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.038318 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.038324 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.038331 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-06-22 20:17:16.038338 | orchestrator | 2025-06-22 20:17:16.038344 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:17:16.038351 | orchestrator | Sunday 22 June 2025 20:16:41 +0000 (0:00:04.011) 0:07:42.853 *********** 2025-06-22 20:17:16.038358 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:17:16.038364 | 
orchestrator | 2025-06-22 20:17:16.038371 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:17:16.038378 | orchestrator | Sunday 22 June 2025 20:16:52 +0000 (0:00:10.983) 0:07:53.836 *********** 2025-06-22 20:17:16.038388 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:17:16.038395 | orchestrator | 2025-06-22 20:17:16.038402 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-22 20:17:16.038409 | orchestrator | Sunday 22 June 2025 20:16:53 +0000 (0:00:01.311) 0:07:55.148 *********** 2025-06-22 20:17:16.038421 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.038428 | orchestrator | 2025-06-22 20:17:16.038435 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-22 20:17:16.038441 | orchestrator | Sunday 22 June 2025 20:16:54 +0000 (0:00:01.376) 0:07:56.524 *********** 2025-06-22 20:17:16.038448 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:17:16.038455 | orchestrator | 2025-06-22 20:17:16.038462 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-22 20:17:16.038469 | orchestrator | Sunday 22 June 2025 20:17:06 +0000 (0:00:11.595) 0:08:08.120 *********** 2025-06-22 20:17:16.038475 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:17:16.038482 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:17:16.038489 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:17:16.038496 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:17:16.038502 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:17:16.038509 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:17:16.038516 | orchestrator | 2025-06-22 20:17:16.038523 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-22 20:17:16.038529 | orchestrator | 2025-06-22 20:17:16.038536 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-22 20:17:16.038543 | orchestrator | Sunday 22 June 2025 20:17:08 +0000 (0:00:01.844) 0:08:09.964 *********** 2025-06-22 20:17:16.038549 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:17:16.038556 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:17:16.038563 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:17:16.038569 | orchestrator | 2025-06-22 20:17:16.038576 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-22 20:17:16.038583 | orchestrator | 2025-06-22 20:17:16.038590 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-22 20:17:16.038610 | orchestrator | Sunday 22 June 2025 20:17:09 +0000 (0:00:01.106) 0:08:11.070 *********** 2025-06-22 20:17:16.038617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.038624 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.038631 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.038637 | orchestrator | 2025-06-22 20:17:16.038644 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-22 20:17:16.038651 | orchestrator | 2025-06-22 20:17:16.038658 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-22 20:17:16.038665 | orchestrator | Sunday 22 June 2025 20:17:10 +0000 (0:00:00.526) 0:08:11.597 
*********** 2025-06-22 20:17:16.038671 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-22 20:17:16.038678 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:17:16.038685 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038692 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-22 20:17:16.038698 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-22 20:17:16.038705 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038712 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:17:16.038719 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-22 20:17:16.038725 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:17:16.038732 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038739 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-22 20:17:16.038746 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-22 20:17:16.038752 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:17:16.038766 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-22 20:17:16.038773 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:17:16.038784 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038791 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-22 20:17:16.038798 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-22 20:17:16.038804 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038811 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:17:16.038818 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-22 20:17:16.038828 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:17:16.038835 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038842 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-22 20:17:16.038848 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-22 20:17:16.038855 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038862 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.038868 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-22 20:17:16.038875 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:17:16.038882 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038888 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-22 20:17:16.038895 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-22 20:17:16.038902 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038908 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.038915 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-22 20:17:16.038922 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 
20:17:16.038932 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:17:16.038939 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-22 20:17:16.038945 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-22 20:17:16.038952 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-22 20:17:16.038959 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.038965 | orchestrator | 2025-06-22 20:17:16.038972 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-22 20:17:16.038979 | orchestrator | 2025-06-22 20:17:16.038985 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-22 20:17:16.038992 | orchestrator | Sunday 22 June 2025 20:17:11 +0000 (0:00:01.230) 0:08:12.827 *********** 2025-06-22 20:17:16.038999 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-22 20:17:16.039005 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-22 20:17:16.039012 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.039019 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-22 20:17:16.039025 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-22 20:17:16.039032 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.039039 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-22 20:17:16.039045 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-22 20:17:16.039052 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.039058 | orchestrator | 2025-06-22 20:17:16.039065 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-22 20:17:16.039072 | orchestrator | 2025-06-22 20:17:16.039078 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-22 20:17:16.039085 | orchestrator | Sunday 22 June 2025 20:17:12 +0000 (0:00:00.709) 0:08:13.537 *********** 2025-06-22 20:17:16.039092 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.039098 | orchestrator | 2025-06-22 20:17:16.039105 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-22 20:17:16.039116 | orchestrator | 2025-06-22 20:17:16.039123 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-22 20:17:16.039129 | orchestrator | Sunday 22 June 2025 20:17:12 +0000 (0:00:00.633) 0:08:14.170 *********** 2025-06-22 20:17:16.039136 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:17:16.039143 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:17:16.039149 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:17:16.039156 | orchestrator | 2025-06-22 20:17:16.039163 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:17:16.039169 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:17:16.039177 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-22 20:17:16.039184 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:17:16.039191 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 
failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:17:16.039198 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 20:17:16.039204 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-22 20:17:16.039211 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-22 20:17:16.039218 | orchestrator | 2025-06-22 20:17:16.039225 | orchestrator | 2025-06-22 20:17:16.039231 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:17:16.039238 | orchestrator | Sunday 22 June 2025 20:17:13 +0000 (0:00:00.460) 0:08:14.631 *********** 2025-06-22 20:17:16.039245 | orchestrator | =============================================================================== 2025-06-22 20:17:16.039255 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.54s 2025-06-22 20:17:16.039262 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.49s 2025-06-22 20:17:16.039268 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.38s 2025-06-22 20:17:16.039275 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.26s 2025-06-22 20:17:16.039282 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.83s 2025-06-22 20:17:16.039288 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.00s 2025-06-22 20:17:16.039295 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.79s 2025-06-22 20:17:16.039301 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.09s 2025-06-22 20:17:16.039308 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.65s 2025-06-22 20:17:16.039314 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.63s 2025-06-22 20:17:16.039321 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.60s 2025-06-22 20:17:16.039328 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.16s 2025-06-22 20:17:16.039337 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.00s 2025-06-22 20:17:16.039347 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.98s 2025-06-22 20:17:16.039354 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.95s 2025-06-22 20:17:16.039361 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.82s 2025-06-22 20:17:16.039372 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.47s 2025-06-22 20:17:16.039379 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 9.16s 2025-06-22 20:17:16.039385 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.36s 2025-06-22 20:17:16.039392 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.12s 2025-06-22 20:17:16.039399 | orchestrator | 2025-06-22 20:17:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:19.072251 | orchestrator | 
2025-06-22 20:17:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:22.114114 | orchestrator | 2025-06-22 20:17:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:25.151785 | orchestrator | 2025-06-22 20:17:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:28.195262 | orchestrator | 2025-06-22 20:17:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:31.234527 | orchestrator | 2025-06-22 20:17:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:34.274304 | orchestrator | 2025-06-22 20:17:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:37.317689 | orchestrator | 2025-06-22 20:17:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:40.357020 | orchestrator | 2025-06-22 20:17:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:43.401529 | orchestrator | 2025-06-22 20:17:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:46.441175 | orchestrator | 2025-06-22 20:17:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:49.475474 | orchestrator | 2025-06-22 20:17:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:52.512981 | orchestrator | 2025-06-22 20:17:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:55.557612 | orchestrator | 2025-06-22 20:17:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:17:58.595641 | orchestrator | 2025-06-22 20:17:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:01.640158 | orchestrator | 2025-06-22 20:18:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:04.690293 | orchestrator | 2025-06-22 20:18:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:07.731663 | orchestrator | 2025-06-22 20:18:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:10.776819 | orchestrator | 2025-06-22 20:18:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:13.820483 | orchestrator | 2025-06-22 20:18:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:18:16.857030 | orchestrator | 2025-06-22 20:18:17.121376 | orchestrator | 2025-06-22 20:18:17.128945 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 22 20:18:17 UTC 2025 2025-06-22 20:18:17.129020 | orchestrator | 2025-06-22 20:18:17.432709 | orchestrator | ok: Runtime: 0:35:52.577329 2025-06-22 20:18:17.666851 | 2025-06-22 20:18:17.666960 | TASK [Bootstrap services] 2025-06-22 20:18:18.389398 | orchestrator | 2025-06-22 20:18:18.389611 | orchestrator | # BOOTSTRAP 2025-06-22 20:18:18.389636 | orchestrator | 2025-06-22 20:18:18.389650 | orchestrator | + set -e 2025-06-22 20:18:18.389663 | orchestrator | + echo 2025-06-22 20:18:18.389677 | orchestrator | + echo '# BOOTSTRAP' 2025-06-22 20:18:18.389695 | orchestrator | + echo 2025-06-22 20:18:18.389739 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-22 20:18:18.396070 | orchestrator | + set -e 2025-06-22 20:18:18.396143 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-22 20:18:21.867289 | orchestrator | 2025-06-22 20:18:21 | INFO  | It takes a moment until task d1ccf0ec-8071-49be-ab92-295192f3b35e (flavor-manager) has been started and output is visible here. 
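The trace above shows the bootstrap entry point: /opt/configuration/scripts/bootstrap-services.sh runs under set -e and hands off to the numbered step scripts below /opt/configuration/scripts/bootstrap/, starting with 300-openstack.sh. A minimal sketch of that chaining pattern follows (an assumption about the wrapper's shape, not the actual script contents):

    #!/usr/bin/env bash
    # Sketch only: run every numbered bootstrap step in order, aborting on the first failure.
    set -e
    for step in /opt/configuration/scripts/bootstrap/[0-9][0-9][0-9]-*.sh; do
        echo "# running ${step}"
        sh -c "${step}"
    done

Because set -e is active in both the outer task and the wrapper, a non-zero exit from any single step fails the whole "Bootstrap services" task.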
2025-06-22 20:18:25.489025 | orchestrator | 2025-06-22 20:18:25 | INFO  | Flavor SCS-1V-4 created 2025-06-22 20:18:25.958735 | orchestrator | 2025-06-22 20:18:25 | INFO  | Flavor SCS-2V-8 created 2025-06-22 20:18:26.284943 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-4V-16 created 2025-06-22 20:18:26.421946 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-8V-32 created 2025-06-22 20:18:26.558131 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-1V-2 created 2025-06-22 20:18:26.672986 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-2V-4 created 2025-06-22 20:18:26.796106 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-4V-8 created 2025-06-22 20:18:26.921119 | orchestrator | 2025-06-22 20:18:26 | INFO  | Flavor SCS-8V-16 created 2025-06-22 20:18:27.059817 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-16V-32 created 2025-06-22 20:18:27.182107 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-1V-8 created 2025-06-22 20:18:27.313399 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-2V-16 created 2025-06-22 20:18:27.443139 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-4V-32 created 2025-06-22 20:18:27.565540 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-1L-1 created 2025-06-22 20:18:27.687654 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-2V-4-20s created 2025-06-22 20:18:27.818910 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-4V-16-100s created 2025-06-22 20:18:27.963647 | orchestrator | 2025-06-22 20:18:27 | INFO  | Flavor SCS-1V-4-10 created 2025-06-22 20:18:28.074450 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-2V-8-20 created 2025-06-22 20:18:28.203690 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-4V-16-50 created 2025-06-22 20:18:28.324326 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-8V-32-100 created 2025-06-22 20:18:28.438783 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-1V-2-5 created 2025-06-22 20:18:28.565582 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-2V-4-10 created 2025-06-22 20:18:28.698824 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-4V-8-20 created 2025-06-22 20:18:28.827725 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-8V-16-50 created 2025-06-22 20:18:28.957332 | orchestrator | 2025-06-22 20:18:28 | INFO  | Flavor SCS-16V-32-100 created 2025-06-22 20:18:29.083245 | orchestrator | 2025-06-22 20:18:29 | INFO  | Flavor SCS-1V-8-20 created 2025-06-22 20:18:29.217757 | orchestrator | 2025-06-22 20:18:29 | INFO  | Flavor SCS-2V-16-50 created 2025-06-22 20:18:29.341617 | orchestrator | 2025-06-22 20:18:29 | INFO  | Flavor SCS-4V-32-100 created 2025-06-22 20:18:29.470720 | orchestrator | 2025-06-22 20:18:29 | INFO  | Flavor SCS-1L-1-5 created 2025-06-22 20:18:31.590116 | orchestrator | 2025-06-22 20:18:31 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-22 20:18:31.595417 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:18:31.595569 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:18:31.595617 | orchestrator | Registering Redlock._release_script 2025-06-22 20:18:31.652618 | orchestrator | 2025-06-22 20:18:31 | INFO  | Task 6ed0a91b-abf7-4ed2-a1f1-274eaead7c5b (bootstrap-basic) was prepared for execution. 2025-06-22 20:18:31.652715 | orchestrator | 2025-06-22 20:18:31 | INFO  | It takes a moment until task 6ed0a91b-abf7-4ed2-a1f1-274eaead7c5b (bootstrap-basic) has been started and output is visible here. 
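The flavor-manager task above registers the SCS standard flavors. In the SCS naming scheme, SCS-2V-8 denotes 2 vCPUs and 8 GiB RAM with no dedicated root disk, and disk-suffixed variants such as SCS-2V-8-20 add a root disk of that size in GB. A hand-rolled equivalent for a single flavor with the plain OpenStack CLI (an illustrative sketch, not how flavor-manager itself is implemented) would look like:

    # Hypothetical manual equivalent of one of the flavors created above.
    openstack flavor create SCS-2V-8 \
        --vcpus 2 \
        --ram 8192 \
        --disk 0 \
        --public
    # A disk-suffixed variant such as SCS-2V-8-20 would additionally pass --disk 20.

flavor-manager creates the whole list in one task, which is why roughly thirty flavors appear within a few seconds of log output above.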
2025-06-22 20:18:35.680260 | orchestrator | 2025-06-22 20:18:35.680954 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-22 20:18:35.682845 | orchestrator | 2025-06-22 20:18:35.682883 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 20:18:35.683603 | orchestrator | Sunday 22 June 2025 20:18:35 +0000 (0:00:00.075) 0:00:00.075 *********** 2025-06-22 20:18:37.460938 | orchestrator | ok: [localhost] 2025-06-22 20:18:37.461110 | orchestrator | 2025-06-22 20:18:37.462138 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-22 20:18:37.463107 | orchestrator | Sunday 22 June 2025 20:18:37 +0000 (0:00:01.781) 0:00:01.857 *********** 2025-06-22 20:18:44.876030 | orchestrator | ok: [localhost] 2025-06-22 20:18:44.876165 | orchestrator | 2025-06-22 20:18:44.876182 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-22 20:18:44.876251 | orchestrator | Sunday 22 June 2025 20:18:44 +0000 (0:00:07.416) 0:00:09.273 *********** 2025-06-22 20:18:51.264356 | orchestrator | changed: [localhost] 2025-06-22 20:18:51.264619 | orchestrator | 2025-06-22 20:18:51.265384 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-22 20:18:51.266134 | orchestrator | Sunday 22 June 2025 20:18:51 +0000 (0:00:06.386) 0:00:15.659 *********** 2025-06-22 20:18:57.236672 | orchestrator | ok: [localhost] 2025-06-22 20:18:57.236766 | orchestrator | 2025-06-22 20:18:57.236852 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-22 20:18:57.237743 | orchestrator | Sunday 22 June 2025 20:18:57 +0000 (0:00:05.971) 0:00:21.631 *********** 2025-06-22 20:19:04.034417 | orchestrator | changed: [localhost] 2025-06-22 20:19:04.034611 | orchestrator | 2025-06-22 20:19:04.035412 | orchestrator | TASK [Create public network] *************************************************** 2025-06-22 20:19:04.037764 | orchestrator | Sunday 22 June 2025 20:19:04 +0000 (0:00:06.794) 0:00:28.426 *********** 2025-06-22 20:19:11.309877 | orchestrator | changed: [localhost] 2025-06-22 20:19:11.310277 | orchestrator | 2025-06-22 20:19:11.310638 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-22 20:19:11.311362 | orchestrator | Sunday 22 June 2025 20:19:11 +0000 (0:00:07.278) 0:00:35.704 *********** 2025-06-22 20:19:17.797692 | orchestrator | changed: [localhost] 2025-06-22 20:19:17.797807 | orchestrator | 2025-06-22 20:19:17.798121 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-22 20:19:17.798473 | orchestrator | Sunday 22 June 2025 20:19:17 +0000 (0:00:06.488) 0:00:42.193 *********** 2025-06-22 20:19:23.008391 | orchestrator | changed: [localhost] 2025-06-22 20:19:23.009008 | orchestrator | 2025-06-22 20:19:23.009124 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-22 20:19:23.009145 | orchestrator | Sunday 22 June 2025 20:19:23 +0000 (0:00:05.211) 0:00:47.404 *********** 2025-06-22 20:19:27.227368 | orchestrator | changed: [localhost] 2025-06-22 20:19:27.229346 | orchestrator | 2025-06-22 20:19:27.230138 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-22 20:19:27.232168 | orchestrator | Sunday 22 June 2025 20:19:27 
+0000 (0:00:04.216) 0:00:51.621 *********** 2025-06-22 20:19:30.613245 | orchestrator | ok: [localhost] 2025-06-22 20:19:30.613404 | orchestrator | 2025-06-22 20:19:30.613628 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:19:30.614242 | orchestrator | 2025-06-22 20:19:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:19:30.614338 | orchestrator | 2025-06-22 20:19:30 | INFO  | Please wait and do not abort execution. 2025-06-22 20:19:30.614744 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:19:30.616112 | orchestrator | 2025-06-22 20:19:30.617565 | orchestrator | 2025-06-22 20:19:30.618269 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:19:30.619254 | orchestrator | Sunday 22 June 2025 20:19:30 +0000 (0:00:03.386) 0:00:55.008 *********** 2025-06-22 20:19:30.619532 | orchestrator | =============================================================================== 2025-06-22 20:19:30.620206 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.42s 2025-06-22 20:19:30.620524 | orchestrator | Create public network --------------------------------------------------- 7.28s 2025-06-22 20:19:30.621027 | orchestrator | Create volume type local ------------------------------------------------ 6.79s 2025-06-22 20:19:30.621307 | orchestrator | Set public network to default ------------------------------------------- 6.49s 2025-06-22 20:19:30.621930 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.39s 2025-06-22 20:19:30.622238 | orchestrator | Get volume type local --------------------------------------------------- 5.97s 2025-06-22 20:19:30.622750 | orchestrator | Create public subnet ---------------------------------------------------- 5.21s 2025-06-22 20:19:30.623148 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.22s 2025-06-22 20:19:30.623578 | orchestrator | Create manager role ----------------------------------------------------- 3.39s 2025-06-22 20:19:30.623968 | orchestrator | Gathering Facts --------------------------------------------------------- 1.78s 2025-06-22 20:19:32.826523 | orchestrator | 2025-06-22 20:19:32 | INFO  | It takes a moment until task 86ed11bb-1d48-4e61-9b60-33334a17cf53 (image-manager) has been started and output is visible here. 2025-06-22 20:19:36.249017 | orchestrator | 2025-06-22 20:19:36 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-22 20:19:36.472632 | orchestrator | 2025-06-22 20:19:36 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-22 20:19:36.474869 | orchestrator | 2025-06-22 20:19:36 | INFO  | Importing image Cirros 0.6.2 2025-06-22 20:19:36.474903 | orchestrator | 2025-06-22 20:19:36 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:19:38.198842 | orchestrator | 2025-06-22 20:19:38 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:19:40.248289 | orchestrator | 2025-06-22 20:19:40 | INFO  | Waiting for import to complete... 
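The bootstrap-basic play recapped above (ok=10, changed=6) prepares the basic OpenStack resources: the LUKS and local volume types, a public external network with its subnet and a default IPv4 subnet pool, and a manager role; each "Get ..." task checks for the resource before the corresponding "Create ..." task changes it. A rough CLI equivalent of those tasks, with placeholder CIDRs since the play takes its values from the testbed configuration (the play presumably uses the openstack.cloud Ansible modules, so this is only a sketch):

    # Sketch only: resource names follow the task titles above, address ranges are placeholders.
    openstack volume type create LUKS
    openstack volume type create local
    openstack network create --external public
    openstack network set --default public
    openstack subnet create --network public --subnet-range 192.168.112.0/24 public-subnet
    openstack subnet pool create --default --pool-prefix 10.0.0.0/16 default-ipv4
    openstack role create manager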
2025-06-22 20:19:50.386715 | orchestrator | 2025-06-22 20:19:50 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-22 20:19:50.575409 | orchestrator | 2025-06-22 20:19:50 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-22 20:19:50.577372 | orchestrator | 2025-06-22 20:19:50 | INFO  | Setting internal_version = 0.6.2 2025-06-22 20:19:50.577662 | orchestrator | 2025-06-22 20:19:50 | INFO  | Setting image_original_user = cirros 2025-06-22 20:19:50.578666 | orchestrator | 2025-06-22 20:19:50 | INFO  | Adding tag os:cirros 2025-06-22 20:19:50.877778 | orchestrator | 2025-06-22 20:19:50 | INFO  | Setting property architecture: x86_64 2025-06-22 20:19:51.131625 | orchestrator | 2025-06-22 20:19:51 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:19:51.340713 | orchestrator | 2025-06-22 20:19:51 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:19:51.562731 | orchestrator | 2025-06-22 20:19:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:19:51.757932 | orchestrator | 2025-06-22 20:19:51 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:19:51.915294 | orchestrator | 2025-06-22 20:19:51 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:19:52.088942 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property os_distro: cirros 2025-06-22 20:19:52.287864 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property replace_frequency: never 2025-06-22 20:19:52.444169 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property uuid_validity: none 2025-06-22 20:19:52.613813 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property provided_until: none 2025-06-22 20:19:52.785469 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property image_description: Cirros 2025-06-22 20:19:52.955967 | orchestrator | 2025-06-22 20:19:52 | INFO  | Setting property image_name: Cirros 2025-06-22 20:19:53.121518 | orchestrator | 2025-06-22 20:19:53 | INFO  | Setting property internal_version: 0.6.2 2025-06-22 20:19:53.293687 | orchestrator | 2025-06-22 20:19:53 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:19:53.458389 | orchestrator | 2025-06-22 20:19:53 | INFO  | Setting property os_version: 0.6.2 2025-06-22 20:19:53.614896 | orchestrator | 2025-06-22 20:19:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:19:54.048711 | orchestrator | 2025-06-22 20:19:54 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-22 20:19:54.255390 | orchestrator | 2025-06-22 20:19:54 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-22 20:19:54.255640 | orchestrator | 2025-06-22 20:19:54 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-22 20:19:54.256907 | orchestrator | 2025-06-22 20:19:54 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-22 20:19:54.457911 | orchestrator | 2025-06-22 20:19:54 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-22 20:19:54.664136 | orchestrator | 2025-06-22 20:19:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-22 20:19:54.664882 | orchestrator | 2025-06-22 20:19:54 | INFO  | Importing image Cirros 0.6.3 2025-06-22 20:19:54.665677 | orchestrator | 2025-06-22 20:19:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:19:54.975933 | orchestrator | 2025-06-22 
20:19:54 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:19:57.023977 | orchestrator | 2025-06-22 20:19:57 | INFO  | Waiting for import to complete... 2025-06-22 20:20:07.141011 | orchestrator | 2025-06-22 20:20:07 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-22 20:20:07.580063 | orchestrator | 2025-06-22 20:20:07 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-22 20:20:07.580941 | orchestrator | 2025-06-22 20:20:07 | INFO  | Setting internal_version = 0.6.3 2025-06-22 20:20:07.581633 | orchestrator | 2025-06-22 20:20:07 | INFO  | Setting image_original_user = cirros 2025-06-22 20:20:07.582385 | orchestrator | 2025-06-22 20:20:07 | INFO  | Adding tag os:cirros 2025-06-22 20:20:07.835372 | orchestrator | 2025-06-22 20:20:07 | INFO  | Setting property architecture: x86_64 2025-06-22 20:20:08.169764 | orchestrator | 2025-06-22 20:20:08 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:20:08.391868 | orchestrator | 2025-06-22 20:20:08 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:20:08.637261 | orchestrator | 2025-06-22 20:20:08 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:20:08.848971 | orchestrator | 2025-06-22 20:20:08 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:20:09.062898 | orchestrator | 2025-06-22 20:20:09 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:20:09.258003 | orchestrator | 2025-06-22 20:20:09 | INFO  | Setting property os_distro: cirros 2025-06-22 20:20:09.473440 | orchestrator | 2025-06-22 20:20:09 | INFO  | Setting property replace_frequency: never 2025-06-22 20:20:09.683438 | orchestrator | 2025-06-22 20:20:09 | INFO  | Setting property uuid_validity: none 2025-06-22 20:20:09.882801 | orchestrator | 2025-06-22 20:20:09 | INFO  | Setting property provided_until: none 2025-06-22 20:20:10.090769 | orchestrator | 2025-06-22 20:20:10 | INFO  | Setting property image_description: Cirros 2025-06-22 20:20:10.282146 | orchestrator | 2025-06-22 20:20:10 | INFO  | Setting property image_name: Cirros 2025-06-22 20:20:10.444157 | orchestrator | 2025-06-22 20:20:10 | INFO  | Setting property internal_version: 0.6.3 2025-06-22 20:20:10.672515 | orchestrator | 2025-06-22 20:20:10 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:20:10.891122 | orchestrator | 2025-06-22 20:20:10 | INFO  | Setting property os_version: 0.6.3 2025-06-22 20:20:11.053229 | orchestrator | 2025-06-22 20:20:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:20:11.216666 | orchestrator | 2025-06-22 20:20:11 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-22 20:20:11.395968 | orchestrator | 2025-06-22 20:20:11 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-22 20:20:11.397360 | orchestrator | 2025-06-22 20:20:11 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-22 20:20:11.398360 | orchestrator | 2025-06-22 20:20:11 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-22 20:20:12.486163 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-22 20:20:14.346292 | orchestrator | 2025-06-22 20:20:14 | INFO  | date: 2025-06-22 2025-06-22 20:20:14.346367 | orchestrator | 2025-06-22 20:20:14 | INFO  | image: octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:20:14.346458 | orchestrator | 2025-06-22 20:20:14 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:20:14.346487 | orchestrator | 2025-06-22 20:20:14 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2.CHECKSUM 2025-06-22 20:20:14.373963 | orchestrator | 2025-06-22 20:20:14 | INFO  | checksum: 77df9fefb5aab55dc760a767e58162a9735f5740229c1da42280293548a761a7 2025-06-22 20:20:14.444707 | orchestrator | 2025-06-22 20:20:14 | INFO  | It takes a moment until task ca61908c-1df3-4c30-a78b-8f0a0c072602 (image-manager) has been started and output is visible here. 2025-06-22 20:20:14.686741 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-22 20:20:14.686946 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-22 20:20:16.891965 | orchestrator | 2025-06-22 20:20:16 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:20:16.912945 | orchestrator | 2025-06-22 20:20:16 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2: 200 2025-06-22 20:20:16.913628 | orchestrator | 2025-06-22 20:20:16 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-22 2025-06-22 20:20:16.914326 | orchestrator | 2025-06-22 20:20:16 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:20:17.246220 | orchestrator | 2025-06-22 20:20:17 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:20:19.302359 | orchestrator | 2025-06-22 20:20:19 | INFO  | Waiting for import to complete... 2025-06-22 20:20:29.583882 | orchestrator | 2025-06-22 20:20:29 | INFO  | Waiting for import to complete... 2025-06-22 20:20:39.679293 | orchestrator | 2025-06-22 20:20:39 | INFO  | Waiting for import to complete... 2025-06-22 20:20:49.761231 | orchestrator | 2025-06-22 20:20:49 | INFO  | Waiting for import to complete... 2025-06-22 20:20:59.854727 | orchestrator | 2025-06-22 20:20:59 | INFO  | Waiting for import to complete... 
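The 301 script above resolves the Amphora image URL together with its .CHECKSUM companion and logs the expected SHA-256 (77df9fef...) before handing the import to openstack-image-manager. A minimal sketch of that verification step, assuming the checksum file contains the hex digest somewhere in its text (the real script may parse it differently):

    # Sketch: fetch the published checksum and verify a local download against it.
    IMAGE_URL="https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2"
    expected=$(curl -fsSL "${IMAGE_URL}.CHECKSUM" | grep -Eo '[0-9a-f]{64}' | head -n 1)
    curl -fsSL -o amphora.qcow2 "${IMAGE_URL}"
    actual=$(sha256sum amphora.qcow2 | awk '{print $1}')
    [ "${expected}" = "${actual}" ] || { echo "checksum mismatch for amphora image" >&2; exit 1; }

In the job itself the download and upload are handled by the image manager task, which then applies the amphora tag and the usual image properties shown in the log a few lines further down.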
2025-06-22 20:21:09.979818 | orchestrator | 2025-06-22 20:21:09 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-22' successfully completed, reloading images 2025-06-22 20:21:10.291682 | orchestrator | 2025-06-22 20:21:10 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:21:10.292387 | orchestrator | 2025-06-22 20:21:10 | INFO  | Setting internal_version = 2025-06-22 2025-06-22 20:21:10.293229 | orchestrator | 2025-06-22 20:21:10 | INFO  | Setting image_original_user = ubuntu 2025-06-22 20:21:10.294249 | orchestrator | 2025-06-22 20:21:10 | INFO  | Adding tag amphora 2025-06-22 20:21:10.500136 | orchestrator | 2025-06-22 20:21:10 | INFO  | Adding tag os:ubuntu 2025-06-22 20:21:10.707295 | orchestrator | 2025-06-22 20:21:10 | INFO  | Setting property architecture: x86_64 2025-06-22 20:21:10.920148 | orchestrator | 2025-06-22 20:21:10 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:21:11.140552 | orchestrator | 2025-06-22 20:21:11 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:21:11.338955 | orchestrator | 2025-06-22 20:21:11 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:21:11.546969 | orchestrator | 2025-06-22 20:21:11 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:21:11.759026 | orchestrator | 2025-06-22 20:21:11 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:21:11.983825 | orchestrator | 2025-06-22 20:21:11 | INFO  | Setting property os_distro: ubuntu 2025-06-22 20:21:12.186863 | orchestrator | 2025-06-22 20:21:12 | INFO  | Setting property replace_frequency: quarterly 2025-06-22 20:21:12.398694 | orchestrator | 2025-06-22 20:21:12 | INFO  | Setting property uuid_validity: last-1 2025-06-22 20:21:12.606524 | orchestrator | 2025-06-22 20:21:12 | INFO  | Setting property provided_until: none 2025-06-22 20:21:12.826590 | orchestrator | 2025-06-22 20:21:12 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-22 20:21:13.069362 | orchestrator | 2025-06-22 20:21:13 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-22 20:21:13.281162 | orchestrator | 2025-06-22 20:21:13 | INFO  | Setting property internal_version: 2025-06-22 2025-06-22 20:21:13.487770 | orchestrator | 2025-06-22 20:21:13 | INFO  | Setting property image_original_user: ubuntu 2025-06-22 20:21:13.697665 | orchestrator | 2025-06-22 20:21:13 | INFO  | Setting property os_version: 2025-06-22 2025-06-22 20:21:13.918514 | orchestrator | 2025-06-22 20:21:13 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:21:14.134602 | orchestrator | 2025-06-22 20:21:14 | INFO  | Setting property image_build_date: 2025-06-22 2025-06-22 20:21:14.372102 | orchestrator | 2025-06-22 20:21:14 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:21:14.372697 | orchestrator | 2025-06-22 20:21:14 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:21:14.533679 | orchestrator | 2025-06-22 20:21:14 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-22 20:21:14.533838 | orchestrator | 2025-06-22 20:21:14 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-22 20:21:14.533963 | orchestrator | 2025-06-22 20:21:14 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-22 20:21:14.533990 | 
orchestrator | 2025-06-22 20:21:14 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-22 20:21:15.402085 | orchestrator | ok: Runtime: 0:02:57.010746 2025-06-22 20:21:15.473571 | 2025-06-22 20:21:15.473719 | TASK [Run checks] 2025-06-22 20:21:16.177017 | orchestrator | + set -e 2025-06-22 20:21:16.177258 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:21:16.177285 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:21:16.177306 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:21:16.177320 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:21:16.177333 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:21:16.177347 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:21:16.178367 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:21:16.185059 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:21:16.185119 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:21:16.185140 | orchestrator | 2025-06-22 20:21:16.185162 | orchestrator | # CHECK 2025-06-22 20:21:16.185214 | orchestrator | 2025-06-22 20:21:16.185233 | orchestrator | + echo 2025-06-22 20:21:16.185266 | orchestrator | + echo '# CHECK' 2025-06-22 20:21:16.185287 | orchestrator | + echo 2025-06-22 20:21:16.185305 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:21:16.186655 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:21:16.248036 | orchestrator | 2025-06-22 20:21:16.248137 | orchestrator | ## Containers @ testbed-manager 2025-06-22 20:21:16.248154 | orchestrator | 2025-06-22 20:21:16.248169 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:21:16.248181 | orchestrator | + echo 2025-06-22 20:21:16.248193 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-22 20:21:16.248205 | orchestrator | + echo 2025-06-22 20:21:16.248216 | orchestrator | + osism container testbed-manager ps 2025-06-22 20:21:18.292043 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:21:18.292195 | orchestrator | 942c4521d5be registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2025-06-22 20:21:18.292223 | orchestrator | 0382c715fe3b registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-06-22 20:21:18.292243 | orchestrator | 2326bc4ba2e8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-22 20:21:18.292255 | orchestrator | 2e4bef84c81d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-22 20:21:18.292267 | orchestrator | 8a11213b5244 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2025-06-22 20:21:18.292279 | orchestrator | 66238f73bd5e registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient 2025-06-22 20:21:18.292295 | orchestrator | 9a072ef9bfd0 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:21:18.292307 | 
orchestrator | 060067d35d71 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:21:18.292318 | orchestrator | 3ebdabb401dc registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 20:21:18.292355 | orchestrator | dfa8f5fb3d54 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient 2025-06-22 20:21:18.292367 | orchestrator | c8933c93d9a7 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-06-22 20:21:18.292379 | orchestrator | 6051317abd97 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-22 20:21:18.292390 | orchestrator | 6c97a48a35a4 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 56 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1 2025-06-22 20:21:18.292439 | orchestrator | a31b6d3ae880 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) kolla-ansible 2025-06-22 20:21:18.292481 | orchestrator | bdf634cf9fc3 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-kubernetes 2025-06-22 20:21:18.292494 | orchestrator | 1909a2e49510 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-ansible 2025-06-22 20:21:18.292505 | orchestrator | e24af017d06f registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) ceph-ansible 2025-06-22 20:21:18.292516 | orchestrator | 11608ebedd08 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-22 20:21:18.292527 | orchestrator | 7437b873710f registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-flower-1 2025-06-22 20:21:18.292539 | orchestrator | ae792898f104 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-openstack-1 2025-06-22 20:21:18.292550 | orchestrator | 6cc827d66abd registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 56 minutes ago Up 40 minutes (healthy) osismclient 2025-06-22 20:21:18.292561 | orchestrator | 7f26ead68443 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-beat-1 2025-06-22 20:21:18.292572 | orchestrator | 5c916b9639a2 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-22 20:21:18.292593 | orchestrator | 8d0690d4909c registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2025-06-22 20:21:18.292604 | orchestrator | ab9bc4a37c55 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-22 20:21:18.292615 | orchestrator | f8aaa6bc5bc0 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-listener-1 
2025-06-22 20:21:18.292627 | orchestrator | 17153d3f2891 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-22 20:21:18.532733 | orchestrator | 2025-06-22 20:21:18.532841 | orchestrator | ## Images @ testbed-manager 2025-06-22 20:21:18.532859 | orchestrator | 2025-06-22 20:21:18.532871 | orchestrator | + echo 2025-06-22 20:21:18.532883 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-22 20:21:18.532895 | orchestrator | + echo 2025-06-22 20:21:18.532906 | orchestrator | + osism container testbed-manager images 2025-06-22 20:21:20.505573 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:21:20.505703 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e2c78a28297e 17 hours ago 11.5MB 2025-06-22 20:21:20.505719 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 31eca7c9891c 17 hours ago 226MB 2025-06-22 20:21:20.505734 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 2 weeks ago 574MB 2025-06-22 20:21:20.505744 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 3 weeks ago 578MB 2025-06-22 20:21:20.505754 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:21:20.505776 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:21:20.505808 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:21:20.505818 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 3 weeks ago 892MB 2025-06-22 20:21:20.505828 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 3 weeks ago 361MB 2025-06-22 20:21:20.505837 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:21:20.505847 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:21:20.505857 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 3 weeks ago 457MB 2025-06-22 20:21:20.505867 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 3 weeks ago 538MB 2025-06-22 20:21:20.505876 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 3 weeks ago 1.21GB 2025-06-22 20:21:20.505925 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 3 weeks ago 308MB 2025-06-22 20:21:20.505971 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 weeks ago 297MB 2025-06-22 20:21:20.505988 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 weeks ago 41.4MB 2025-06-22 20:21:20.506004 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 3 weeks ago 224MB 2025-06-22 20:21:20.506061 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 6 weeks ago 453MB 2025-06-22 20:21:20.506076 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB 2025-06-22 20:21:20.506086 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 
2025-06-22 20:21:20.506096 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB 2025-06-22 20:21:20.756704 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:21:20.757048 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:21:20.800658 | orchestrator | 2025-06-22 20:21:20.800749 | orchestrator | ## Containers @ testbed-node-0 2025-06-22 20:21:20.800763 | orchestrator | 2025-06-22 20:21:20.800775 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:21:20.800786 | orchestrator | + echo 2025-06-22 20:21:20.800799 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-22 20:21:20.800812 | orchestrator | + echo 2025-06-22 20:21:20.800823 | orchestrator | + osism container testbed-node-0 ps 2025-06-22 20:21:22.954886 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:21:22.955007 | orchestrator | 31cd38e0478d registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-22 20:21:22.955028 | orchestrator | 79141abd03a0 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_conductor 2025-06-22 20:21:22.955040 | orchestrator | 3bbf9b670364 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:21:22.955052 | orchestrator | 21ca39cae3a6 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-22 20:21:22.955063 | orchestrator | b696e639b153 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-06-22 20:21:22.955074 | orchestrator | b951f4051fb5 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:21:22.955085 | orchestrator | 40a9ae78c4c6 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:21:22.955096 | orchestrator | 532ab5124ddb registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:21:22.955107 | orchestrator | a325525ea6a9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-22 20:21:22.955140 | orchestrator | acc097fc78d7 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-22 20:21:22.955152 | orchestrator | ea6e37522502 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-22 20:21:22.955187 | orchestrator | a2ae0dc229f4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-22 20:21:22.955199 | orchestrator | 517fe9f2b469 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-22 20:21:22.955210 | orchestrator | 
ff1ebbbc5e1c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-22 20:21:22.955220 | orchestrator | e6a743cf7c34 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-06-22 20:21:22.955231 | orchestrator | 685ea5ae6b1b registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-22 20:21:22.955242 | orchestrator | 8224cf899c90 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-22 20:21:22.955253 | orchestrator | adccc9c1eca3 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-22 20:21:22.955264 | orchestrator | 35eb2dadc04b registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-22 20:21:22.955295 | orchestrator | fdd890edace5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-22 20:21:22.955307 | orchestrator | 5612e8cb512a registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-22 20:21:22.955318 | orchestrator | 7ed25572300c registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-22 20:21:22.955328 | orchestrator | 8a1772c21c36 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-22 20:21:22.955339 | orchestrator | e9194151f21c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-06-22 20:21:22.955350 | orchestrator | 00d1e0604e4c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-06-22 20:21:22.955361 | orchestrator | 7fdbd20709b1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-06-22 20:21:22.955372 | orchestrator | eda0d3cdaba3 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-22 20:21:22.955382 | orchestrator | dc45f69d0eb9 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:21:22.955423 | orchestrator | 04d969845043 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 20:21:22.955445 | orchestrator | 88317e74803a registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-22 20:21:22.955456 | orchestrator | bbe4cb43dddf registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-06-22 
20:21:22.955467 | orchestrator | 0731c33152b6 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-22 20:21:22.955483 | orchestrator | 8fd4c1ded273 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2025-06-22 20:21:22.955495 | orchestrator | 2606e1e332cb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-06-22 20:21:22.955506 | orchestrator | 5a052d115603 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-06-22 20:21:22.955517 | orchestrator | fba8d1acf267 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-22 20:21:22.955527 | orchestrator | ee0ba6ee5ffe registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-22 20:21:22.955538 | orchestrator | 48834d8ebccd registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-22 20:21:22.955549 | orchestrator | 1bcb665f6bdf registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:21:22.955560 | orchestrator | 42ba3681dfaf registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-06-22 20:21:22.955583 | orchestrator | 9de6547c8135 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-06-22 20:21:22.955594 | orchestrator | a7ec8831abe2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2025-06-22 20:21:22.955605 | orchestrator | ef3bd616b983 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-06-22 20:21:22.955616 | orchestrator | a096527af3d8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-22 20:21:22.955627 | orchestrator | 7cc9180e134e registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:21:22.955638 | orchestrator | 90e43bdffe23 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-22 20:21:22.955649 | orchestrator | 486a38378122 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:21:22.955666 | orchestrator | a21a52c06970 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 20:21:22.955678 | orchestrator | 341d8d50ae18 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:21:22.955689 | orchestrator | dbb3b1007eb3 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago 
Up 31 minutes cron 2025-06-22 20:21:22.955699 | orchestrator | 8c4bb812e894 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:21:22.955710 | orchestrator | dfcac6d699fc registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 20:21:23.204828 | orchestrator | 2025-06-22 20:21:23.204964 | orchestrator | ## Images @ testbed-node-0 2025-06-22 20:21:23.204983 | orchestrator | 2025-06-22 20:21:23.204996 | orchestrator | + echo 2025-06-22 20:21:23.205008 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-22 20:21:23.205020 | orchestrator | + echo 2025-06-22 20:21:23.205032 | orchestrator | + osism container testbed-node-0 images 2025-06-22 20:21:25.301532 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:21:25.301644 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 20:21:25.301657 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:21:25.301668 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:21:25.301678 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:21:25.301688 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:21:25.301698 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:21:25.301708 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:21:25.302499 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:21:25.302521 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:21:25.302532 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:21:25.302544 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:21:25.302555 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:21:25.302566 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:21:25.302577 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:21:25.302588 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:21:25.302598 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:21:25.302677 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:21:25.302690 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:21:25.302701 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 
2025-06-22 20:21:25.302713 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:21:25.302725 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:21:25.302737 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:21:25.302763 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:21:25.302773 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:21:25.302783 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:21:25.302793 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 3 weeks ago 1.04GB 2025-06-22 20:21:25.302802 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 3 weeks ago 1.04GB 2025-06-22 20:21:25.302812 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 3 weeks ago 1.04GB 2025-06-22 20:21:25.302822 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 3 weeks ago 1.04GB 2025-06-22 20:21:25.302831 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:21:25.302841 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:21:25.302850 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 weeks ago 1.12GB 2025-06-22 20:21:25.302860 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 weeks ago 1.12GB 2025-06-22 20:21:25.302870 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 weeks ago 1.1GB 2025-06-22 20:21:25.302879 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 weeks ago 1.1GB 2025-06-22 20:21:25.302889 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 weeks ago 1.1GB 2025-06-22 20:21:25.302899 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:21:25.302909 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:21:25.302918 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:21:25.302928 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:21:25.302948 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:21:25.302959 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:21:25.302968 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:21:25.302985 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 
20:21:25.302995 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 3 weeks ago 1.04GB 2025-06-22 20:21:25.303004 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 3 weeks ago 1.04GB 2025-06-22 20:21:25.303014 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:21:25.303029 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:21:25.303039 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:21:25.303048 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:21:25.303058 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:21:25.303068 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:21:25.303078 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:21:25.303088 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:21:25.303097 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:21:25.303107 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:21:25.303117 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 3 weeks ago 1.11GB 2025-06-22 20:21:25.303126 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 3 weeks ago 1.12GB 2025-06-22 20:21:25.303136 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:21:25.303146 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:21:25.303156 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:21:25.303165 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:21:25.303175 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:21:25.566831 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:21:25.566923 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:21:25.617228 | orchestrator | 2025-06-22 20:21:25.617323 | orchestrator | ## Containers @ testbed-node-1 2025-06-22 20:21:25.617335 | orchestrator | 2025-06-22 20:21:25.617343 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:21:25.617352 | orchestrator | + echo 2025-06-22 20:21:25.617361 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-22 20:21:25.617370 | orchestrator | + echo 2025-06-22 20:21:25.617378 | orchestrator | + osism container testbed-node-1 ps 2025-06-22 20:21:27.698467 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:21:27.698612 | orchestrator | 5595c8eb6ef9 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 
"dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-22 20:21:27.698654 | orchestrator | 37aca2777a9e registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 6 minutes (healthy) nova_conductor 2025-06-22 20:21:27.698667 | orchestrator | d4f7e51040ec registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:21:27.698678 | orchestrator | 693e3f2e5e82 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 20:21:27.698689 | orchestrator | 5f5bf63f4b45 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-22 20:21:27.698700 | orchestrator | 56abc73f43af registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:21:27.698730 | orchestrator | 6fc9bc52c989 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:21:27.698742 | orchestrator | 1d9ef0db5246 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:21:27.698764 | orchestrator | 4d13e3289eef registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-22 20:21:27.698779 | orchestrator | 050ae7c9c140 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-22 20:21:27.698790 | orchestrator | 1398dc035685 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-22 20:21:27.698801 | orchestrator | e98fecfb6072 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-22 20:21:27.698812 | orchestrator | c08e99bcaba6 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-22 20:21:27.698822 | orchestrator | 26536250b349 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-22 20:21:27.698833 | orchestrator | f0964ffe6345 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-06-22 20:21:27.698844 | orchestrator | d11ba40ea913 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-22 20:21:27.698855 | orchestrator | f0564cbf4b87 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-22 20:21:27.698865 | orchestrator | b4959c211a81 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-22 20:21:27.698876 | orchestrator | 10470113e13f 
registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-22 20:21:27.698909 | orchestrator | 918fb34f5795 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-22 20:21:27.698921 | orchestrator | 14ccc5c0f161 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-22 20:21:27.698931 | orchestrator | 3039419f147f registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-22 20:21:27.698942 | orchestrator | 637780994e13 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-22 20:21:27.698953 | orchestrator | 3f1b2b31db3b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-06-22 20:21:27.698971 | orchestrator | 3b63d46c527e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-06-22 20:21:27.698994 | orchestrator | 8d0a1642329a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-06-22 20:21:27.699014 | orchestrator | 660707a47e35 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-22 20:21:27.699027 | orchestrator | 52e64bfafa12 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:21:27.699040 | orchestrator | 6e4916830c8e registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:21:27.699052 | orchestrator | 039e463605ee registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-06-22 20:21:27.699065 | orchestrator | 596bc7be3aa6 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-22 20:21:27.699078 | orchestrator | feaa6583d5a1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-22 20:21:27.699090 | orchestrator | b2321edbe67f registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-06-22 20:21:27.699103 | orchestrator | ad9e3713dbdf registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-22 20:21:27.699115 | orchestrator | cfecfd714612 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-06-22 20:21:27.699128 | orchestrator | 5e5e8599d13d registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-22 20:21:27.699140 | orchestrator | ca49c36ce92a 
registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-22 20:21:27.699159 | orchestrator | 869ed23fa36a registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-22 20:21:27.699172 | orchestrator | f1267640d31a registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:21:27.699185 | orchestrator | c9b2a88b09e6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:21:27.699206 | orchestrator | 3ca1fe634781 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:21:27.699220 | orchestrator | 6a6e1d2ae399 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:21:27.699233 | orchestrator | 416a64f2da4a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-22 20:21:27.699245 | orchestrator | 9584a88ffaf0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2025-06-22 20:21:27.699257 | orchestrator | c4f680ef4165 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:21:27.699270 | orchestrator | e85d1fa57fc0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-22 20:21:27.699283 | orchestrator | 53e337a38a90 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:21:27.699295 | orchestrator | cbdf3d31c4cb registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 20:21:27.699307 | orchestrator | 58ac40ac42f6 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:21:27.699320 | orchestrator | 60dd1bc89cb2 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-22 20:21:27.699338 | orchestrator | 85628ad045af registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:21:27.699349 | orchestrator | 014edb7272b0 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 20:21:27.925484 | orchestrator | 2025-06-22 20:21:27.925585 | orchestrator | ## Images @ testbed-node-1 2025-06-22 20:21:27.925601 | orchestrator | 2025-06-22 20:21:27.925613 | orchestrator | + echo 2025-06-22 20:21:27.925625 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-22 20:21:27.925637 | orchestrator | + echo 2025-06-22 20:21:27.925648 | orchestrator | + osism container testbed-node-1 images 2025-06-22 20:21:29.965095 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:21:29.965211 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 
3 weeks ago 319MB 2025-06-22 20:21:29.965234 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:21:29.965282 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:21:29.965302 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:21:29.965322 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:21:29.965341 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:21:29.965385 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:21:29.965458 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:21:29.965476 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:21:29.965491 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:21:29.965507 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:21:29.965521 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:21:29.965537 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:21:29.965552 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:21:29.965568 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:21:29.965584 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:21:29.965599 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:21:29.965615 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 20:21:29.965632 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:21:29.965649 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:21:29.965665 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:21:29.965682 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:21:29.965698 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:21:29.965715 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:21:29.965733 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:21:29.965772 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:21:29.965806 | 
orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:21:29.965824 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:21:29.965843 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:21:29.965880 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:21:29.965899 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:21:29.965958 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:21:29.965973 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:21:29.965984 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:21:29.965995 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 20:21:29.966006 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:21:29.966095 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:21:29.966109 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:21:29.966120 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:21:29.966131 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:21:29.966141 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:21:29.966152 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:21:29.966163 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:21:29.966173 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:21:29.966184 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:21:29.966200 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:21:29.966211 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:21:29.966221 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:21:29.966232 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:21:29.966243 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:21:30.193759 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:21:30.194641 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 
20:21:30.243916 | orchestrator | 2025-06-22 20:21:30.243997 | orchestrator | ## Containers @ testbed-node-2 2025-06-22 20:21:30.244014 | orchestrator | 2025-06-22 20:21:30.244027 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:21:30.244039 | orchestrator | + echo 2025-06-22 20:21:30.244052 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-22 20:21:30.244064 | orchestrator | + echo 2025-06-22 20:21:30.244075 | orchestrator | + osism container testbed-node-2 ps 2025-06-22 20:21:32.371149 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:21:32.371260 | orchestrator | 0641d64386d5 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-22 20:21:32.371299 | orchestrator | ec5624c03348 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:21:32.371313 | orchestrator | 383ec1d775cb registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:21:32.371343 | orchestrator | 133b9c513941 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 20:21:32.371365 | orchestrator | d63ec8519668 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-22 20:21:32.371377 | orchestrator | dce202e88971 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:21:32.371388 | orchestrator | 71bab53f5700 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:21:32.371444 | orchestrator | cb2b688ad4cb registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:21:32.371457 | orchestrator | 5b100ec3a48a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-22 20:21:32.371470 | orchestrator | 3d0f8180dd65 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-22 20:21:32.371481 | orchestrator | b71ea923dcac registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-22 20:21:32.371493 | orchestrator | 6acdca8280f2 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-22 20:21:32.371503 | orchestrator | 6d2fa5d2e1b1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-22 20:21:32.371514 | orchestrator | eb2c3fdb0c36 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-22 20:21:32.371526 | orchestrator | 8e506859e376 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes 
(healthy) magnum_api 2025-06-22 20:21:32.371537 | orchestrator | 7ed4116799ed registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-22 20:21:32.371548 | orchestrator | b704515777c9 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-22 20:21:32.371559 | orchestrator | 994b28b9ad6d registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-22 20:21:32.371589 | orchestrator | 057e8a88bfa7 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-22 20:21:32.371626 | orchestrator | 1d6e3d1aae21 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-22 20:21:32.371638 | orchestrator | 22d82fa62489 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-22 20:21:32.371649 | orchestrator | f3cb5b747990 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-22 20:21:32.371660 | orchestrator | e08804116544 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-22 20:21:32.371671 | orchestrator | 7b9de8ac778b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-06-22 20:21:32.371682 | orchestrator | 4f8db1a62b67 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-06-22 20:21:32.371693 | orchestrator | 3df97c3ca37e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-06-22 20:21:32.371704 | orchestrator | 96764d98a3bf registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-22 20:21:32.371715 | orchestrator | fc2ff787d45f registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:21:32.371726 | orchestrator | b8c9d443ac41 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-06-22 20:21:32.371737 | orchestrator | 2593052fb548 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-06-22 20:21:32.371748 | orchestrator | ff56a9a24d29 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-22 20:21:32.371759 | orchestrator | 2cc7e091ea34 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-22 20:21:32.371770 | orchestrator | 2d92a15d5bf4 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 
"dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-22 20:21:32.371781 | orchestrator | ad8ed7f6d7db registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-22 20:21:32.371791 | orchestrator | a95cdcc0b595 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-06-22 20:21:32.371802 | orchestrator | a7678ba89328 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-22 20:21:32.371813 | orchestrator | ef625d65df94 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-22 20:21:32.371831 | orchestrator | e3b2ee50209f registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-22 20:21:32.371842 | orchestrator | 17a52e05dda3 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:21:32.371853 | orchestrator | 2f4d206e47aa registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:21:32.371870 | orchestrator | 185bf095e6b6 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:21:32.371882 | orchestrator | c8ab2ebc243b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:21:32.371893 | orchestrator | 4c12fe113e29 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 20:21:32.371904 | orchestrator | d7629d5bce40 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-06-22 20:21:32.371915 | orchestrator | 6aa5df6732f0 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:21:32.371926 | orchestrator | a56183abfd43 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-22 20:21:32.371937 | orchestrator | 9adefb6610df registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:21:32.371948 | orchestrator | 542f1efd2886 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 20:21:32.371959 | orchestrator | 2470ad2bad8a registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:21:32.371970 | orchestrator | fc0c462137a9 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-22 20:21:32.371981 | orchestrator | 8b6e0e637175 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:21:32.371997 | orchestrator | 563249938e01 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 
"dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 20:21:32.599108 | orchestrator | 2025-06-22 20:21:32.599214 | orchestrator | ## Images @ testbed-node-2 2025-06-22 20:21:32.599230 | orchestrator | 2025-06-22 20:21:32.599242 | orchestrator | + echo 2025-06-22 20:21:32.599253 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-22 20:21:32.599265 | orchestrator | + echo 2025-06-22 20:21:32.599277 | orchestrator | + osism container testbed-node-2 images 2025-06-22 20:21:34.696071 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:21:34.696211 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 20:21:34.696232 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:21:34.696279 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:21:34.696298 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:21:34.696316 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:21:34.696334 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:21:34.696369 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:21:34.696388 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:21:34.696484 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:21:34.696499 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:21:34.696510 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:21:34.696521 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:21:34.696532 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:21:34.696542 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:21:34.696553 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:21:34.696564 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:21:34.696574 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:21:34.696586 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 20:21:34.696599 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:21:34.696612 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:21:34.696624 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:21:34.696637 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:21:34.696649 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:21:34.696661 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:21:34.696673 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:21:34.696685 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:21:34.696698 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:21:34.696710 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:21:34.696734 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:21:34.696746 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:21:34.696758 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:21:34.696791 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:21:34.696804 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:21:34.696816 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:21:34.696828 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 20:21:34.696840 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:21:34.696852 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:21:34.696864 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:21:34.696877 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:21:34.696889 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:21:34.696902 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:21:34.696915 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:21:34.696927 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:21:34.696940 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:21:34.696952 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:21:34.696965 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:21:34.696975 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:21:34.696986 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:21:34.696996 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:21:34.697007 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:21:34.993147 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-22 20:21:35.000757 | orchestrator | + set -e 2025-06-22 20:21:35.000826 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:21:35.002270 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:21:35.002297 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 20:21:35.002309 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:21:35.002320 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:21:35.002332 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:21:35.002378 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 20:21:35.002390 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:21:35.002438 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:21:35.002472 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:21:35.002521 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:21:35.002533 | orchestrator | ++ export ARA=false 2025-06-22 20:21:35.002545 | orchestrator | ++ ARA=false 2025-06-22 20:21:35.002556 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:21:35.002567 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:21:35.002578 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:21:35.002589 | orchestrator | ++ TEMPEST=false 2025-06-22 20:21:35.002601 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:21:35.002612 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:21:35.002623 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 20:21:35.002635 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 20:21:35.002662 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:21:35.002673 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:21:35.002684 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:21:35.002695 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:21:35.002706 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:21:35.002717 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:21:35.002729 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:21:35.002740 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:21:35.002814 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 20:21:35.002828 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-22 20:21:35.013732 | orchestrator | + set -e 2025-06-22 20:21:35.013824 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:21:35.013838 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:21:35.013851 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:21:35.013862 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:21:35.013873 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:21:35.013938 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:21:35.015036 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2025-06-22 20:21:35.021178 | orchestrator | 2025-06-22 20:21:35.021231 | orchestrator | # Ceph status 2025-06-22 20:21:35.021247 | orchestrator | 2025-06-22 20:21:35.021261 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:21:35.021273 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:21:35.021287 | orchestrator | + echo 2025-06-22 20:21:35.021299 | orchestrator | + echo '# Ceph status' 2025-06-22 20:21:35.021311 | orchestrator | + echo 2025-06-22 20:21:35.021328 | orchestrator | + ceph -s 2025-06-22 20:21:35.648075 | orchestrator | cluster: 2025-06-22 20:21:35.648193 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-22 20:21:35.648210 | orchestrator | health: HEALTH_OK 2025-06-22 20:21:35.648225 | orchestrator | 2025-06-22 20:21:35.648237 | orchestrator | services: 2025-06-22 20:21:35.648249 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-06-22 20:21:35.648262 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2025-06-22 20:21:35.648274 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-22 20:21:35.648285 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-06-22 20:21:35.648297 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-22 20:21:35.648308 | orchestrator | 2025-06-22 20:21:35.648319 | orchestrator | data: 2025-06-22 20:21:35.648342 | orchestrator | volumes: 1/1 healthy 2025-06-22 20:21:35.648354 | orchestrator | pools: 14 pools, 401 pgs 2025-06-22 20:21:35.648365 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-22 20:21:35.648377 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-22 20:21:35.648388 | orchestrator | pgs: 401 active+clean 2025-06-22 20:21:35.648449 | orchestrator | 2025-06-22 20:21:35.694375 | orchestrator | 2025-06-22 20:21:35.694515 | orchestrator | # Ceph versions 2025-06-22 20:21:35.694536 | orchestrator | 2025-06-22 20:21:35.694551 | orchestrator | + echo 2025-06-22 20:21:35.694566 | orchestrator | + echo '# Ceph versions' 2025-06-22 20:21:35.694581 | orchestrator | + echo 2025-06-22 20:21:35.694596 | orchestrator | + ceph versions 2025-06-22 20:21:36.285648 | orchestrator | { 2025-06-22 20:21:36.285768 | orchestrator | "mon": { 2025-06-22 20:21:36.285786 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:21:36.285799 | orchestrator | }, 2025-06-22 20:21:36.285811 | orchestrator | "mgr": { 2025-06-22 20:21:36.285822 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:21:36.285840 | orchestrator | }, 2025-06-22 20:21:36.285858 | orchestrator | "osd": { 2025-06-22 20:21:36.285878 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-22 20:21:36.285941 | orchestrator | }, 2025-06-22 20:21:36.285961 | orchestrator | "mds": { 2025-06-22 20:21:36.285981 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:21:36.286000 | orchestrator | }, 2025-06-22 20:21:36.286071 | orchestrator | "rgw": { 2025-06-22 20:21:36.286086 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:21:36.286107 | orchestrator | }, 2025-06-22 20:21:36.286127 | orchestrator | "overall": { 2025-06-22 20:21:36.286148 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-22 20:21:36.286170 | orchestrator | } 2025-06-22 20:21:36.286190 | orchestrator | } 2025-06-22 20:21:36.338321 | orchestrator | 2025-06-22 20:21:36.338443 | orchestrator | # Ceph OSD tree 2025-06-22 20:21:36.338458 | orchestrator | 2025-06-22 20:21:36.338470 | orchestrator | + echo 2025-06-22 20:21:36.338482 | orchestrator | + echo '# Ceph OSD tree' 2025-06-22 20:21:36.338513 | orchestrator | + echo 2025-06-22 20:21:36.338525 | orchestrator | + ceph osd df tree 2025-06-22 20:21:36.877368 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-22 20:21:36.877557 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-22 20:21:36.877573 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-22 20:21:36.877585 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.27 1.23 200 up osd.0 2025-06-22 20:21:36.877596 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 932 MiB 859 MiB 1 KiB 74 MiB 19 GiB 4.56 0.77 190 up osd.4 2025-06-22 20:21:36.877607 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-22 20:21:36.877617 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 933 MiB 859 MiB 1 KiB 74 MiB 19 GiB 4.56 0.77 176 up osd.1 2025-06-22 20:21:36.877628 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.27 1.23 216 up osd.3 2025-06-22 20:21:36.877639 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-22 20:21:36.877650 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.84 1.16 191 up osd.2 2025-06-22 20:21:36.877660 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1021 MiB 947 MiB 1 KiB 74 MiB 19 GiB 4.99 0.84 197 up osd.5 2025-06-22 20:21:36.877671 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-22 20:21:36.877683 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 1.23 2025-06-22 20:21:36.922700 | orchestrator | 2025-06-22 20:21:36.922784 | orchestrator | # Ceph monitor status 2025-06-22 20:21:36.922797 | orchestrator | 2025-06-22 20:21:36.922809 | orchestrator | + echo 2025-06-22 20:21:36.922820 | orchestrator | + echo '# Ceph monitor status' 2025-06-22 20:21:36.922832 | orchestrator | + echo 2025-06-22 20:21:36.922843 | orchestrator | + ceph mon stat 2025-06-22 20:21:37.494469 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-22 20:21:37.537011 | orchestrator | 2025-06-22 20:21:37.537102 | orchestrator | # Ceph quorum status 2025-06-22 20:21:37.537117 | orchestrator | 2025-06-22 20:21:37.537129 | orchestrator | + echo 2025-06-22 20:21:37.537140 | orchestrator | + echo '# Ceph quorum status' 2025-06-22 20:21:37.537152 | orchestrator | + echo 2025-06-22 20:21:37.537819 | orchestrator | + ceph quorum_status 2025-06-22 20:21:37.537842 | orchestrator | + jq 2025-06-22 20:21:38.190534 | orchestrator | { 2025-06-22 20:21:38.190829 | orchestrator | "election_epoch": 6, 2025-06-22 20:21:38.190877 | 
orchestrator | "quorum": [ 2025-06-22 20:21:38.190891 | orchestrator | 0, 2025-06-22 20:21:38.190904 | orchestrator | 1, 2025-06-22 20:21:38.190915 | orchestrator | 2 2025-06-22 20:21:38.190927 | orchestrator | ], 2025-06-22 20:21:38.190939 | orchestrator | "quorum_names": [ 2025-06-22 20:21:38.190951 | orchestrator | "testbed-node-0", 2025-06-22 20:21:38.190962 | orchestrator | "testbed-node-1", 2025-06-22 20:21:38.190974 | orchestrator | "testbed-node-2" 2025-06-22 20:21:38.190986 | orchestrator | ], 2025-06-22 20:21:38.190998 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-22 20:21:38.191011 | orchestrator | "quorum_age": 1755, 2025-06-22 20:21:38.191022 | orchestrator | "features": { 2025-06-22 20:21:38.191034 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-22 20:21:38.191046 | orchestrator | "quorum_mon": [ 2025-06-22 20:21:38.191057 | orchestrator | "kraken", 2025-06-22 20:21:38.191069 | orchestrator | "luminous", 2025-06-22 20:21:38.191081 | orchestrator | "mimic", 2025-06-22 20:21:38.191092 | orchestrator | "osdmap-prune", 2025-06-22 20:21:38.191104 | orchestrator | "nautilus", 2025-06-22 20:21:38.191115 | orchestrator | "octopus", 2025-06-22 20:21:38.191127 | orchestrator | "pacific", 2025-06-22 20:21:38.191139 | orchestrator | "elector-pinging", 2025-06-22 20:21:38.191150 | orchestrator | "quincy", 2025-06-22 20:21:38.191162 | orchestrator | "reef" 2025-06-22 20:21:38.191173 | orchestrator | ] 2025-06-22 20:21:38.191185 | orchestrator | }, 2025-06-22 20:21:38.191196 | orchestrator | "monmap": { 2025-06-22 20:21:38.191208 | orchestrator | "epoch": 1, 2025-06-22 20:21:38.191220 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-22 20:21:38.191232 | orchestrator | "modified": "2025-06-22T19:52:00.400255Z", 2025-06-22 20:21:38.191244 | orchestrator | "created": "2025-06-22T19:52:00.400255Z", 2025-06-22 20:21:38.191256 | orchestrator | "min_mon_release": 18, 2025-06-22 20:21:38.191267 | orchestrator | "min_mon_release_name": "reef", 2025-06-22 20:21:38.191279 | orchestrator | "election_strategy": 1, 2025-06-22 20:21:38.191290 | orchestrator | "disallowed_leaders: ": "", 2025-06-22 20:21:38.191302 | orchestrator | "stretch_mode": false, 2025-06-22 20:21:38.191313 | orchestrator | "tiebreaker_mon": "", 2025-06-22 20:21:38.191325 | orchestrator | "removed_ranks: ": "", 2025-06-22 20:21:38.191337 | orchestrator | "features": { 2025-06-22 20:21:38.191348 | orchestrator | "persistent": [ 2025-06-22 20:21:38.191360 | orchestrator | "kraken", 2025-06-22 20:21:38.191374 | orchestrator | "luminous", 2025-06-22 20:21:38.191387 | orchestrator | "mimic", 2025-06-22 20:21:38.191428 | orchestrator | "osdmap-prune", 2025-06-22 20:21:38.191442 | orchestrator | "nautilus", 2025-06-22 20:21:38.191454 | orchestrator | "octopus", 2025-06-22 20:21:38.191466 | orchestrator | "pacific", 2025-06-22 20:21:38.191478 | orchestrator | "elector-pinging", 2025-06-22 20:21:38.191491 | orchestrator | "quincy", 2025-06-22 20:21:38.191504 | orchestrator | "reef" 2025-06-22 20:21:38.191516 | orchestrator | ], 2025-06-22 20:21:38.191528 | orchestrator | "optional": [] 2025-06-22 20:21:38.191540 | orchestrator | }, 2025-06-22 20:21:38.191553 | orchestrator | "mons": [ 2025-06-22 20:21:38.191565 | orchestrator | { 2025-06-22 20:21:38.191578 | orchestrator | "rank": 0, 2025-06-22 20:21:38.191590 | orchestrator | "name": "testbed-node-0", 2025-06-22 20:21:38.191603 | orchestrator | "public_addrs": { 2025-06-22 20:21:38.191615 | orchestrator | "addrvec": [ 2025-06-22 
20:21:38.191628 | orchestrator | { 2025-06-22 20:21:38.191640 | orchestrator | "type": "v2", 2025-06-22 20:21:38.191653 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-22 20:21:38.191665 | orchestrator | "nonce": 0 2025-06-22 20:21:38.191678 | orchestrator | }, 2025-06-22 20:21:38.191690 | orchestrator | { 2025-06-22 20:21:38.191702 | orchestrator | "type": "v1", 2025-06-22 20:21:38.191714 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-22 20:21:38.191725 | orchestrator | "nonce": 0 2025-06-22 20:21:38.191736 | orchestrator | } 2025-06-22 20:21:38.191746 | orchestrator | ] 2025-06-22 20:21:38.191757 | orchestrator | }, 2025-06-22 20:21:38.191768 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-22 20:21:38.191779 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-22 20:21:38.191789 | orchestrator | "priority": 0, 2025-06-22 20:21:38.191800 | orchestrator | "weight": 0, 2025-06-22 20:21:38.191810 | orchestrator | "crush_location": "{}" 2025-06-22 20:21:38.191886 | orchestrator | }, 2025-06-22 20:21:38.191898 | orchestrator | { 2025-06-22 20:21:38.191909 | orchestrator | "rank": 1, 2025-06-22 20:21:38.191920 | orchestrator | "name": "testbed-node-1", 2025-06-22 20:21:38.191940 | orchestrator | "public_addrs": { 2025-06-22 20:21:38.191951 | orchestrator | "addrvec": [ 2025-06-22 20:21:38.191996 | orchestrator | { 2025-06-22 20:21:38.192008 | orchestrator | "type": "v2", 2025-06-22 20:21:38.192019 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-22 20:21:38.192030 | orchestrator | "nonce": 0 2025-06-22 20:21:38.192041 | orchestrator | }, 2025-06-22 20:21:38.192052 | orchestrator | { 2025-06-22 20:21:38.192063 | orchestrator | "type": "v1", 2025-06-22 20:21:38.192074 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-22 20:21:38.192084 | orchestrator | "nonce": 0 2025-06-22 20:21:38.192095 | orchestrator | } 2025-06-22 20:21:38.192106 | orchestrator | ] 2025-06-22 20:21:38.192116 | orchestrator | }, 2025-06-22 20:21:38.192127 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-22 20:21:38.192138 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-22 20:21:38.192149 | orchestrator | "priority": 0, 2025-06-22 20:21:38.192159 | orchestrator | "weight": 0, 2025-06-22 20:21:38.192170 | orchestrator | "crush_location": "{}" 2025-06-22 20:21:38.192181 | orchestrator | }, 2025-06-22 20:21:38.192191 | orchestrator | { 2025-06-22 20:21:38.192202 | orchestrator | "rank": 2, 2025-06-22 20:21:38.192213 | orchestrator | "name": "testbed-node-2", 2025-06-22 20:21:38.192224 | orchestrator | "public_addrs": { 2025-06-22 20:21:38.192234 | orchestrator | "addrvec": [ 2025-06-22 20:21:38.192245 | orchestrator | { 2025-06-22 20:21:38.192256 | orchestrator | "type": "v2", 2025-06-22 20:21:38.192267 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-22 20:21:38.192278 | orchestrator | "nonce": 0 2025-06-22 20:21:38.192288 | orchestrator | }, 2025-06-22 20:21:38.192299 | orchestrator | { 2025-06-22 20:21:38.192310 | orchestrator | "type": "v1", 2025-06-22 20:21:38.192321 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-22 20:21:38.192331 | orchestrator | "nonce": 0 2025-06-22 20:21:38.192342 | orchestrator | } 2025-06-22 20:21:38.192353 | orchestrator | ] 2025-06-22 20:21:38.192364 | orchestrator | }, 2025-06-22 20:21:38.192374 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-22 20:21:38.192385 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-22 20:21:38.192413 | orchestrator | "priority": 0, 2025-06-22 20:21:38.192425 | 
orchestrator | "weight": 0, 2025-06-22 20:21:38.192436 | orchestrator | "crush_location": "{}" 2025-06-22 20:21:38.192447 | orchestrator | } 2025-06-22 20:21:38.192458 | orchestrator | ] 2025-06-22 20:21:38.192468 | orchestrator | } 2025-06-22 20:21:38.192510 | orchestrator | } 2025-06-22 20:21:38.192536 | orchestrator | 2025-06-22 20:21:38.192552 | orchestrator | + echo 2025-06-22 20:21:38.192572 | orchestrator | # Ceph free space status 2025-06-22 20:21:38.192583 | orchestrator | 2025-06-22 20:21:38.192594 | orchestrator | + echo '# Ceph free space status' 2025-06-22 20:21:38.192605 | orchestrator | + echo 2025-06-22 20:21:38.192616 | orchestrator | + ceph df 2025-06-22 20:21:38.770749 | orchestrator | --- RAW STORAGE --- 2025-06-22 20:21:38.770844 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-22 20:21:38.770871 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:21:38.770884 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:21:38.770896 | orchestrator | 2025-06-22 20:21:38.770908 | orchestrator | --- POOLS --- 2025-06-22 20:21:38.770920 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-22 20:21:38.770931 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-22 20:21:38.770942 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:21:38.770953 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-22 20:21:38.770963 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:21:38.770974 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:21:38.770985 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-22 20:21:38.770995 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-22 20:21:38.771028 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:21:38.771047 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-22 20:21:38.771093 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:21:38.771112 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:21:38.771128 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2025-06-22 20:21:38.771139 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:21:38.771150 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:21:38.815270 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:21:38.870527 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:21:38.870601 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-22 20:21:38.870615 | orchestrator | + osism apply facts 2025-06-22 20:21:40.520321 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:21:40.520431 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:21:40.520445 | orchestrator | Registering Redlock._release_script 2025-06-22 20:21:40.580327 | orchestrator | 2025-06-22 20:21:40 | INFO  | Task f7090b56-e14d-428d-8adf-f5cdcb1bb30c (facts) was prepared for execution. 2025-06-22 20:21:40.580514 | orchestrator | 2025-06-22 20:21:40 | INFO  | It takes a moment until task f7090b56-e14d-428d-8adf-f5cdcb1bb30c (facts) has been started and output is visible here. 
2025-06-22 20:21:44.375709 | orchestrator | 2025-06-22 20:21:44.379380 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 20:21:44.379453 | orchestrator | 2025-06-22 20:21:44.380008 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 20:21:44.380743 | orchestrator | Sunday 22 June 2025 20:21:44 +0000 (0:00:00.204) 0:00:00.204 *********** 2025-06-22 20:21:45.705981 | orchestrator | ok: [testbed-manager] 2025-06-22 20:21:45.706965 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:21:45.709154 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:21:45.710096 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:21:45.710897 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:21:45.712364 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:21:45.714154 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:21:45.715134 | orchestrator | 2025-06-22 20:21:45.716486 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 20:21:45.717761 | orchestrator | Sunday 22 June 2025 20:21:45 +0000 (0:00:01.325) 0:00:01.530 *********** 2025-06-22 20:21:45.862217 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:21:45.928108 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:21:45.997371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:21:46.072072 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:21:46.141254 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:21:46.762799 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:21:46.766779 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:21:46.767543 | orchestrator | 2025-06-22 20:21:46.768874 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 20:21:46.769531 | orchestrator | 2025-06-22 20:21:46.770626 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 20:21:46.771345 | orchestrator | Sunday 22 June 2025 20:21:46 +0000 (0:00:01.062) 0:00:02.593 *********** 2025-06-22 20:21:52.163501 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:21:52.163624 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:21:52.164192 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:21:52.165064 | orchestrator | ok: [testbed-manager] 2025-06-22 20:21:52.165881 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:21:52.169638 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:21:52.170654 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:21:52.171134 | orchestrator | 2025-06-22 20:21:52.172226 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 20:21:52.172835 | orchestrator | 2025-06-22 20:21:52.173801 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 20:21:52.173958 | orchestrator | Sunday 22 June 2025 20:21:52 +0000 (0:00:05.400) 0:00:07.994 *********** 2025-06-22 20:21:52.338101 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:21:52.423274 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:21:52.507931 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:21:52.588774 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:21:52.671890 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:21:52.717548 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:21:52.717639 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 20:21:52.718161 | orchestrator | 2025-06-22 20:21:52.718819 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:21:52.718984 | orchestrator | 2025-06-22 20:21:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:21:52.719339 | orchestrator | 2025-06-22 20:21:52 | INFO  | Please wait and do not abort execution. 2025-06-22 20:21:52.720097 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.722567 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.722968 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.723579 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.724428 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.724924 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.725681 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:21:52.725854 | orchestrator | 2025-06-22 20:21:52.726180 | orchestrator | 2025-06-22 20:21:52.726629 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:21:52.726998 | orchestrator | Sunday 22 June 2025 20:21:52 +0000 (0:00:00.555) 0:00:08.549 *********** 2025-06-22 20:21:52.727351 | orchestrator | =============================================================================== 2025-06-22 20:21:52.727843 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.40s 2025-06-22 20:21:52.728184 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s 2025-06-22 20:21:52.730841 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-06-22 20:21:52.730875 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-06-22 20:21:53.356653 | orchestrator | + osism validate ceph-mons 2025-06-22 20:21:55.048882 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:21:55.048984 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:21:55.048999 | orchestrator | Registering Redlock._release_script 2025-06-22 20:22:14.408635 | orchestrator | 2025-06-22 20:22:14.408748 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-22 20:22:14.408759 | orchestrator | 2025-06-22 20:22:14.408765 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:22:14.408771 | orchestrator | Sunday 22 June 2025 20:21:59 +0000 (0:00:00.468) 0:00:00.468 *********** 2025-06-22 20:22:14.408777 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.408783 | orchestrator | 2025-06-22 20:22:14.408788 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:22:14.408794 | orchestrator | Sunday 22 June 2025 20:21:59 +0000 (0:00:00.592) 0:00:01.061 *********** 2025-06-22 20:22:14.408800 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.408852 | orchestrator | 2025-06-22 20:22:14.408858 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:22:14.408864 | orchestrator | Sunday 22 June 2025 20:22:00 +0000 (0:00:00.801) 0:00:01.862 *********** 2025-06-22 20:22:14.408870 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.408877 | orchestrator | 2025-06-22 20:22:14.408883 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 20:22:14.408888 | orchestrator | Sunday 22 June 2025 20:22:00 +0000 (0:00:00.225) 0:00:02.088 *********** 2025-06-22 20:22:14.408894 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.408899 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:14.408905 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:14.408910 | orchestrator | 2025-06-22 20:22:14.408916 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 20:22:14.408921 | orchestrator | Sunday 22 June 2025 20:22:01 +0000 (0:00:00.299) 0:00:02.388 *********** 2025-06-22 20:22:14.408926 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:14.408932 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:14.408937 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.408942 | orchestrator | 2025-06-22 20:22:14.408948 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:22:14.408953 | orchestrator | Sunday 22 June 2025 20:22:02 +0000 (0:00:00.968) 0:00:03.356 *********** 2025-06-22 20:22:14.408959 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.408964 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:22:14.408969 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:22:14.408975 | orchestrator | 2025-06-22 20:22:14.408980 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:22:14.408985 | orchestrator | Sunday 22 June 2025 20:22:02 +0000 (0:00:00.287) 0:00:03.644 *********** 2025-06-22 20:22:14.408991 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.408996 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:14.409002 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:14.409007 | orchestrator | 2025-06-22 20:22:14.409013 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:22:14.409018 | orchestrator | Sunday 22 June 2025 20:22:03 +0000 (0:00:00.502) 0:00:04.147 *********** 2025-06-22 20:22:14.409024 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409029 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:14.409034 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:14.409039 | orchestrator | 2025-06-22 20:22:14.409045 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-22 20:22:14.409050 | orchestrator | Sunday 22 June 2025 20:22:03 +0000 (0:00:00.300) 0:00:04.448 *********** 2025-06-22 20:22:14.409056 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409061 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:22:14.409066 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:22:14.409072 | orchestrator | 2025-06-22 20:22:14.409077 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-22 20:22:14.409082 | orchestrator | Sunday 22 June 2025 20:22:03 +0000 
(0:00:00.288) 0:00:04.736 *********** 2025-06-22 20:22:14.409088 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409095 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:14.409100 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:14.409106 | orchestrator | 2025-06-22 20:22:14.409111 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:22:14.409117 | orchestrator | Sunday 22 June 2025 20:22:03 +0000 (0:00:00.300) 0:00:05.037 *********** 2025-06-22 20:22:14.409122 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409127 | orchestrator | 2025-06-22 20:22:14.409133 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:22:14.409139 | orchestrator | Sunday 22 June 2025 20:22:04 +0000 (0:00:00.688) 0:00:05.726 *********** 2025-06-22 20:22:14.409146 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409151 | orchestrator | 2025-06-22 20:22:14.409157 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:22:14.409170 | orchestrator | Sunday 22 June 2025 20:22:04 +0000 (0:00:00.261) 0:00:05.987 *********** 2025-06-22 20:22:14.409176 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409181 | orchestrator | 2025-06-22 20:22:14.409187 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:14.409192 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.247) 0:00:06.234 *********** 2025-06-22 20:22:14.409198 | orchestrator | 2025-06-22 20:22:14.409204 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:14.409209 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.066) 0:00:06.301 *********** 2025-06-22 20:22:14.409214 | orchestrator | 2025-06-22 20:22:14.409220 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:14.409226 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.069) 0:00:06.370 *********** 2025-06-22 20:22:14.409232 | orchestrator | 2025-06-22 20:22:14.409237 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:22:14.409242 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.077) 0:00:06.448 *********** 2025-06-22 20:22:14.409249 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409254 | orchestrator | 2025-06-22 20:22:14.409260 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:22:14.409267 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.267) 0:00:06.715 *********** 2025-06-22 20:22:14.409274 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409279 | orchestrator | 2025-06-22 20:22:14.409315 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-22 20:22:14.409322 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.239) 0:00:06.954 *********** 2025-06-22 20:22:14.409327 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409333 | orchestrator | 2025-06-22 20:22:14.409339 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-22 20:22:14.409345 | orchestrator | Sunday 22 June 2025 20:22:05 +0000 (0:00:00.136) 0:00:07.091 *********** 2025-06-22 20:22:14.409351 | orchestrator 
| changed: [testbed-node-0] 2025-06-22 20:22:14.409357 | orchestrator | 2025-06-22 20:22:14.409362 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-22 20:22:14.409369 | orchestrator | Sunday 22 June 2025 20:22:07 +0000 (0:00:01.613) 0:00:08.704 *********** 2025-06-22 20:22:14.409374 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409379 | orchestrator | 2025-06-22 20:22:14.409385 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-22 20:22:14.409391 | orchestrator | Sunday 22 June 2025 20:22:07 +0000 (0:00:00.304) 0:00:09.009 *********** 2025-06-22 20:22:14.409397 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409402 | orchestrator | 2025-06-22 20:22:14.409408 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-22 20:22:14.409432 | orchestrator | Sunday 22 June 2025 20:22:08 +0000 (0:00:00.352) 0:00:09.362 *********** 2025-06-22 20:22:14.409439 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409446 | orchestrator | 2025-06-22 20:22:14.409452 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-22 20:22:14.409458 | orchestrator | Sunday 22 June 2025 20:22:08 +0000 (0:00:00.308) 0:00:09.671 *********** 2025-06-22 20:22:14.409464 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409470 | orchestrator | 2025-06-22 20:22:14.409476 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-22 20:22:14.409485 | orchestrator | Sunday 22 June 2025 20:22:08 +0000 (0:00:00.292) 0:00:09.964 *********** 2025-06-22 20:22:14.409491 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409497 | orchestrator | 2025-06-22 20:22:14.409502 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-22 20:22:14.409508 | orchestrator | Sunday 22 June 2025 20:22:08 +0000 (0:00:00.119) 0:00:10.083 *********** 2025-06-22 20:22:14.409513 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409527 | orchestrator | 2025-06-22 20:22:14.409532 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-22 20:22:14.409538 | orchestrator | Sunday 22 June 2025 20:22:09 +0000 (0:00:00.131) 0:00:10.215 *********** 2025-06-22 20:22:14.409543 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409549 | orchestrator | 2025-06-22 20:22:14.409555 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-22 20:22:14.409560 | orchestrator | Sunday 22 June 2025 20:22:09 +0000 (0:00:00.132) 0:00:10.348 *********** 2025-06-22 20:22:14.409566 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:22:14.409571 | orchestrator | 2025-06-22 20:22:14.409576 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-22 20:22:14.409581 | orchestrator | Sunday 22 June 2025 20:22:10 +0000 (0:00:01.285) 0:00:11.633 *********** 2025-06-22 20:22:14.409587 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409592 | orchestrator | 2025-06-22 20:22:14.409597 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-22 20:22:14.409603 | orchestrator | Sunday 22 June 2025 20:22:10 +0000 (0:00:00.297) 0:00:11.931 *********** 2025-06-22 20:22:14.409608 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:22:14.409613 | orchestrator | 2025-06-22 20:22:14.409618 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-22 20:22:14.409623 | orchestrator | Sunday 22 June 2025 20:22:10 +0000 (0:00:00.139) 0:00:12.070 *********** 2025-06-22 20:22:14.409628 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:14.409634 | orchestrator | 2025-06-22 20:22:14.409640 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-22 20:22:14.409646 | orchestrator | Sunday 22 June 2025 20:22:11 +0000 (0:00:00.154) 0:00:12.225 *********** 2025-06-22 20:22:14.409651 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409657 | orchestrator | 2025-06-22 20:22:14.409701 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-22 20:22:14.409707 | orchestrator | Sunday 22 June 2025 20:22:11 +0000 (0:00:00.134) 0:00:12.359 *********** 2025-06-22 20:22:14.409712 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409718 | orchestrator | 2025-06-22 20:22:14.409723 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:22:14.409729 | orchestrator | Sunday 22 June 2025 20:22:11 +0000 (0:00:00.326) 0:00:12.686 *********** 2025-06-22 20:22:14.409734 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.409740 | orchestrator | 2025-06-22 20:22:14.409745 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:22:14.409751 | orchestrator | Sunday 22 June 2025 20:22:11 +0000 (0:00:00.274) 0:00:12.961 *********** 2025-06-22 20:22:14.409756 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:14.409762 | orchestrator | 2025-06-22 20:22:14.409767 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:22:14.409772 | orchestrator | Sunday 22 June 2025 20:22:12 +0000 (0:00:00.245) 0:00:13.206 *********** 2025-06-22 20:22:14.409778 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.409784 | orchestrator | 2025-06-22 20:22:14.409789 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:22:14.409795 | orchestrator | Sunday 22 June 2025 20:22:13 +0000 (0:00:01.535) 0:00:14.741 *********** 2025-06-22 20:22:14.409800 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.409806 | orchestrator | 2025-06-22 20:22:14.409811 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:22:14.409817 | orchestrator | Sunday 22 June 2025 20:22:13 +0000 (0:00:00.245) 0:00:14.987 *********** 2025-06-22 20:22:14.409822 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:14.409828 | orchestrator | 2025-06-22 20:22:14.409841 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:17.154578 | orchestrator | Sunday 22 June 2025 20:22:14 +0000 (0:00:00.270) 0:00:15.258 *********** 2025-06-22 20:22:17.154718 | orchestrator | 2025-06-22 20:22:17.154734 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:17.154747 | orchestrator | Sunday 22 June 2025 20:22:14 +0000 (0:00:00.093) 0:00:15.351 *********** 
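Editor's note: the handler that runs next writes the aggregated validation results to a JSON report under /opt/reports/validator/ on testbed-manager; the exact path is printed further down. Without assuming anything about the report schema, the newest report could be inspected afterwards roughly like this (a sketch; the filename pattern is copied from the log output):

  # Sketch: look at the most recent ceph-mons validator report on testbed-manager.
  # The schema is not assumed; jq only pretty-prints it, and a case-insensitive
  # grep flags any "failed" markers.
  latest=$(ls -t /opt/reports/validator/ceph-mons-validator-*-report.json | head -n1)
  jq . "$latest"
  grep -io failed "$latest" || echo "no 'failed' markers found"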
2025-06-22 20:22:17.154757 | orchestrator | 2025-06-22 20:22:17.154768 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:17.154779 | orchestrator | Sunday 22 June 2025 20:22:14 +0000 (0:00:00.075) 0:00:15.427 *********** 2025-06-22 20:22:17.154789 | orchestrator | 2025-06-22 20:22:17.154800 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:22:17.154810 | orchestrator | Sunday 22 June 2025 20:22:14 +0000 (0:00:00.078) 0:00:15.506 *********** 2025-06-22 20:22:17.154822 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:17.154832 | orchestrator | 2025-06-22 20:22:17.154843 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:22:17.154853 | orchestrator | Sunday 22 June 2025 20:22:16 +0000 (0:00:01.752) 0:00:17.258 *********** 2025-06-22 20:22:17.154864 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:22:17.154874 | orchestrator |  "msg": [ 2025-06-22 20:22:17.154887 | orchestrator |  "Validator run completed.", 2025-06-22 20:22:17.154898 | orchestrator |  "You can find the report file here:", 2025-06-22 20:22:17.154909 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-22T20:21:59+00:00-report.json", 2025-06-22 20:22:17.154920 | orchestrator |  "on the following host:", 2025-06-22 20:22:17.154931 | orchestrator |  "testbed-manager" 2025-06-22 20:22:17.154943 | orchestrator |  ] 2025-06-22 20:22:17.154954 | orchestrator | } 2025-06-22 20:22:17.154965 | orchestrator | 2025-06-22 20:22:17.154989 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:22:17.155001 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-22 20:22:17.155017 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:22:17.155029 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:22:17.155039 | orchestrator | 2025-06-22 20:22:17.155050 | orchestrator | 2025-06-22 20:22:17.155061 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:22:17.155071 | orchestrator | Sunday 22 June 2025 20:22:16 +0000 (0:00:00.634) 0:00:17.892 *********** 2025-06-22 20:22:17.155082 | orchestrator | =============================================================================== 2025-06-22 20:22:17.155092 | orchestrator | Write report file ------------------------------------------------------- 1.75s 2025-06-22 20:22:17.155103 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.61s 2025-06-22 20:22:17.155113 | orchestrator | Aggregate test results step one ----------------------------------------- 1.54s 2025-06-22 20:22:17.155126 | orchestrator | Gather status data ------------------------------------------------------ 1.29s 2025-06-22 20:22:17.155138 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-06-22 20:22:17.155150 | orchestrator | Create report output directory ------------------------------------------ 0.80s 2025-06-22 20:22:17.155161 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s 2025-06-22 20:22:17.155180 | 
orchestrator | Print report file information ------------------------------------------- 0.63s 2025-06-22 20:22:17.155199 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-06-22 20:22:17.155218 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-06-22 20:22:17.155237 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.35s 2025-06-22 20:22:17.155269 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-06-22 20:22:17.155289 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2025-06-22 20:22:17.155302 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-06-22 20:22:17.155314 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s 2025-06-22 20:22:17.155326 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-06-22 20:22:17.155339 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-06-22 20:22:17.155352 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-06-22 20:22:17.155371 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2025-06-22 20:22:17.155388 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2025-06-22 20:22:17.393777 | orchestrator | + osism validate ceph-mgrs 2025-06-22 20:22:19.085637 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:22:19.085757 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:22:19.085781 | orchestrator | Registering Redlock._release_script 2025-06-22 20:22:37.841069 | orchestrator | 2025-06-22 20:22:37.841319 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-22 20:22:37.841403 | orchestrator | 2025-06-22 20:22:37.841461 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:22:37.841482 | orchestrator | Sunday 22 June 2025 20:22:23 +0000 (0:00:00.440) 0:00:00.440 *********** 2025-06-22 20:22:37.841501 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.841542 | orchestrator | 2025-06-22 20:22:37.841577 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:22:37.841597 | orchestrator | Sunday 22 June 2025 20:22:24 +0000 (0:00:00.674) 0:00:01.114 *********** 2025-06-22 20:22:37.841617 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.841636 | orchestrator | 2025-06-22 20:22:37.841656 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:22:37.841677 | orchestrator | Sunday 22 June 2025 20:22:25 +0000 (0:00:00.905) 0:00:02.020 *********** 2025-06-22 20:22:37.841692 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.841706 | orchestrator | 2025-06-22 20:22:37.841719 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 20:22:37.841732 | orchestrator | Sunday 22 June 2025 20:22:25 +0000 (0:00:00.329) 0:00:02.350 *********** 2025-06-22 20:22:37.841745 | orchestrator | ok: [testbed-node-0] 2025-06-22 
20:22:37.841757 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:37.841769 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:37.841782 | orchestrator | 2025-06-22 20:22:37.841794 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 20:22:37.841809 | orchestrator | Sunday 22 June 2025 20:22:25 +0000 (0:00:00.305) 0:00:02.655 *********** 2025-06-22 20:22:37.841829 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:37.841847 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:37.841867 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.841885 | orchestrator | 2025-06-22 20:22:37.841904 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:22:37.841924 | orchestrator | Sunday 22 June 2025 20:22:26 +0000 (0:00:00.977) 0:00:03.632 *********** 2025-06-22 20:22:37.841944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.841962 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:22:37.841980 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:22:37.841998 | orchestrator | 2025-06-22 20:22:37.842087 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:22:37.842113 | orchestrator | Sunday 22 June 2025 20:22:26 +0000 (0:00:00.275) 0:00:03.907 *********** 2025-06-22 20:22:37.842131 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.842150 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:37.842207 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:37.842228 | orchestrator | 2025-06-22 20:22:37.842248 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:22:37.842267 | orchestrator | Sunday 22 June 2025 20:22:27 +0000 (0:00:00.473) 0:00:04.380 *********** 2025-06-22 20:22:37.842287 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.842306 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:37.842322 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:37.842334 | orchestrator | 2025-06-22 20:22:37.842345 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-22 20:22:37.842356 | orchestrator | Sunday 22 June 2025 20:22:27 +0000 (0:00:00.286) 0:00:04.667 *********** 2025-06-22 20:22:37.842367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.842379 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:22:37.842398 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:22:37.842417 | orchestrator | 2025-06-22 20:22:37.842481 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-22 20:22:37.842500 | orchestrator | Sunday 22 June 2025 20:22:27 +0000 (0:00:00.274) 0:00:04.942 *********** 2025-06-22 20:22:37.842519 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.842538 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:22:37.842556 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:22:37.842638 | orchestrator | 2025-06-22 20:22:37.842662 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:22:37.842683 | orchestrator | Sunday 22 June 2025 20:22:28 +0000 (0:00:00.307) 0:00:05.249 *********** 2025-06-22 20:22:37.842702 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.842723 | orchestrator | 2025-06-22 20:22:37.842742 | orchestrator | TASK [Aggregate test results step two] 
***************************************** 2025-06-22 20:22:37.842763 | orchestrator | Sunday 22 June 2025 20:22:28 +0000 (0:00:00.618) 0:00:05.868 *********** 2025-06-22 20:22:37.842783 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.842803 | orchestrator | 2025-06-22 20:22:37.842823 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:22:37.842842 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.247) 0:00:06.115 *********** 2025-06-22 20:22:37.842861 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.842881 | orchestrator | 2025-06-22 20:22:37.842901 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.842921 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.259) 0:00:06.375 *********** 2025-06-22 20:22:37.842935 | orchestrator | 2025-06-22 20:22:37.842947 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.842958 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.072) 0:00:06.447 *********** 2025-06-22 20:22:37.842968 | orchestrator | 2025-06-22 20:22:37.842979 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.842990 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.068) 0:00:06.515 *********** 2025-06-22 20:22:37.843001 | orchestrator | 2025-06-22 20:22:37.843012 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:22:37.843028 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.069) 0:00:06.584 *********** 2025-06-22 20:22:37.843046 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.843064 | orchestrator | 2025-06-22 20:22:37.843083 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:22:37.843102 | orchestrator | Sunday 22 June 2025 20:22:29 +0000 (0:00:00.253) 0:00:06.838 *********** 2025-06-22 20:22:37.843121 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.843139 | orchestrator | 2025-06-22 20:22:37.843183 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-22 20:22:37.843203 | orchestrator | Sunday 22 June 2025 20:22:30 +0000 (0:00:00.235) 0:00:07.073 *********** 2025-06-22 20:22:37.843221 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.843238 | orchestrator | 2025-06-22 20:22:37.843257 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-22 20:22:37.843292 | orchestrator | Sunday 22 June 2025 20:22:30 +0000 (0:00:00.111) 0:00:07.185 *********** 2025-06-22 20:22:37.843312 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:22:37.843331 | orchestrator | 2025-06-22 20:22:37.843349 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-22 20:22:37.843367 | orchestrator | Sunday 22 June 2025 20:22:32 +0000 (0:00:01.862) 0:00:09.048 *********** 2025-06-22 20:22:37.843386 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.843405 | orchestrator | 2025-06-22 20:22:37.843461 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-22 20:22:37.843482 | orchestrator | Sunday 22 June 2025 20:22:32 +0000 (0:00:00.252) 0:00:09.300 *********** 2025-06-22 20:22:37.843502 | orchestrator | ok: 
[testbed-node-0] 2025-06-22 20:22:37.843521 | orchestrator | 2025-06-22 20:22:37.843540 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-22 20:22:37.843559 | orchestrator | Sunday 22 June 2025 20:22:33 +0000 (0:00:00.691) 0:00:09.992 *********** 2025-06-22 20:22:37.843577 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.843595 | orchestrator | 2025-06-22 20:22:37.843614 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-22 20:22:37.843632 | orchestrator | Sunday 22 June 2025 20:22:33 +0000 (0:00:00.162) 0:00:10.155 *********** 2025-06-22 20:22:37.843650 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:22:37.843668 | orchestrator | 2025-06-22 20:22:37.843686 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:22:37.843705 | orchestrator | Sunday 22 June 2025 20:22:33 +0000 (0:00:00.135) 0:00:10.290 *********** 2025-06-22 20:22:37.843724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.843742 | orchestrator | 2025-06-22 20:22:37.843761 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:22:37.843779 | orchestrator | Sunday 22 June 2025 20:22:33 +0000 (0:00:00.265) 0:00:10.555 *********** 2025-06-22 20:22:37.843798 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:22:37.843817 | orchestrator | 2025-06-22 20:22:37.843846 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:22:37.843866 | orchestrator | Sunday 22 June 2025 20:22:33 +0000 (0:00:00.237) 0:00:10.793 *********** 2025-06-22 20:22:37.843883 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.843902 | orchestrator | 2025-06-22 20:22:37.843922 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:22:37.843940 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:01.272) 0:00:12.066 *********** 2025-06-22 20:22:37.843959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.843977 | orchestrator | 2025-06-22 20:22:37.843996 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:22:37.844015 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:00.241) 0:00:12.307 *********** 2025-06-22 20:22:37.844034 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.844053 | orchestrator | 2025-06-22 20:22:37.844071 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.844091 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:00.261) 0:00:12.568 *********** 2025-06-22 20:22:37.844111 | orchestrator | 2025-06-22 20:22:37.844129 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.844205 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:00.082) 0:00:12.651 *********** 2025-06-22 20:22:37.844228 | orchestrator | 2025-06-22 20:22:37.844247 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:37.844266 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:00.065) 0:00:12.716 *********** 2025-06-22 20:22:37.844285 | orchestrator | 2025-06-22 20:22:37.844306 | 
orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:22:37.844340 | orchestrator | Sunday 22 June 2025 20:22:35 +0000 (0:00:00.069) 0:00:12.786 *********** 2025-06-22 20:22:37.844361 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:37.844380 | orchestrator | 2025-06-22 20:22:37.844400 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:22:37.844493 | orchestrator | Sunday 22 June 2025 20:22:37 +0000 (0:00:01.623) 0:00:14.410 *********** 2025-06-22 20:22:37.844519 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:22:37.844538 | orchestrator |  "msg": [ 2025-06-22 20:22:37.844558 | orchestrator |  "Validator run completed.", 2025-06-22 20:22:37.844577 | orchestrator |  "You can find the report file here:", 2025-06-22 20:22:37.844595 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-22T20:22:24+00:00-report.json", 2025-06-22 20:22:37.844614 | orchestrator |  "on the following host:", 2025-06-22 20:22:37.844632 | orchestrator |  "testbed-manager" 2025-06-22 20:22:37.844650 | orchestrator |  ] 2025-06-22 20:22:37.844668 | orchestrator | } 2025-06-22 20:22:37.844687 | orchestrator | 2025-06-22 20:22:37.844705 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:22:37.844727 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:22:37.844741 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:22:37.844768 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:22:38.152018 | orchestrator | 2025-06-22 20:22:38.152120 | orchestrator | 2025-06-22 20:22:38.152141 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:22:38.152161 | orchestrator | Sunday 22 June 2025 20:22:37 +0000 (0:00:00.403) 0:00:14.813 *********** 2025-06-22 20:22:38.152188 | orchestrator | =============================================================================== 2025-06-22 20:22:38.152211 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.86s 2025-06-22 20:22:38.152228 | orchestrator | Write report file ------------------------------------------------------- 1.62s 2025-06-22 20:22:38.152245 | orchestrator | Aggregate test results step one ----------------------------------------- 1.27s 2025-06-22 20:22:38.152262 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-06-22 20:22:38.152279 | orchestrator | Create report output directory ------------------------------------------ 0.91s 2025-06-22 20:22:38.152296 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.69s 2025-06-22 20:22:38.152314 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s 2025-06-22 20:22:38.152332 | orchestrator | Aggregate test results step one ----------------------------------------- 0.62s 2025-06-22 20:22:38.152348 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2025-06-22 20:22:38.152366 | orchestrator | Print report file information ------------------------------------------- 0.40s 2025-06-22 
20:22:38.152384 | orchestrator | Define report vars ------------------------------------------------------ 0.33s 2025-06-22 20:22:38.152404 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2025-06-22 20:22:38.152452 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-06-22 20:22:38.152472 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-06-22 20:22:38.152492 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-06-22 20:22:38.152511 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.27s 2025-06-22 20:22:38.152546 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2025-06-22 20:22:38.152560 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-06-22 20:22:38.152604 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-06-22 20:22:38.152615 | orchestrator | Print report file information ------------------------------------------- 0.25s 2025-06-22 20:22:38.378969 | orchestrator | + osism validate ceph-osds 2025-06-22 20:22:40.031994 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:22:40.032091 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:22:40.032104 | orchestrator | Registering Redlock._release_script 2025-06-22 20:22:47.781706 | orchestrator | 2025-06-22 20:22:47.781804 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-22 20:22:47.781821 | orchestrator | 2025-06-22 20:22:47.781832 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:22:47.781844 | orchestrator | Sunday 22 June 2025 20:22:44 +0000 (0:00:00.317) 0:00:00.317 *********** 2025-06-22 20:22:47.781855 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:47.781865 | orchestrator | 2025-06-22 20:22:47.781876 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 20:22:47.781887 | orchestrator | Sunday 22 June 2025 20:22:44 +0000 (0:00:00.585) 0:00:00.902 *********** 2025-06-22 20:22:47.781897 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:47.781908 | orchestrator | 2025-06-22 20:22:47.781919 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:22:47.781929 | orchestrator | Sunday 22 June 2025 20:22:44 +0000 (0:00:00.311) 0:00:01.214 *********** 2025-06-22 20:22:47.781939 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:22:47.781950 | orchestrator | 2025-06-22 20:22:47.781961 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:22:47.781971 | orchestrator | Sunday 22 June 2025 20:22:45 +0000 (0:00:00.762) 0:00:01.977 *********** 2025-06-22 20:22:47.781982 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:47.781993 | orchestrator | 2025-06-22 20:22:47.782004 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 20:22:47.782071 | orchestrator | Sunday 22 June 2025 20:22:45 +0000 (0:00:00.122) 0:00:02.099 *********** 2025-06-22 20:22:47.782085 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 20:22:47.782096 | orchestrator | 2025-06-22 20:22:47.782106 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:22:47.782117 | orchestrator | Sunday 22 June 2025 20:22:45 +0000 (0:00:00.117) 0:00:02.216 *********** 2025-06-22 20:22:47.782128 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:47.782138 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:22:47.782149 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:22:47.782160 | orchestrator | 2025-06-22 20:22:47.782171 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 20:22:47.782182 | orchestrator | Sunday 22 June 2025 20:22:46 +0000 (0:00:00.271) 0:00:02.488 *********** 2025-06-22 20:22:47.782192 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:47.782203 | orchestrator | 2025-06-22 20:22:47.782214 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:22:47.782224 | orchestrator | Sunday 22 June 2025 20:22:46 +0000 (0:00:00.139) 0:00:02.627 *********** 2025-06-22 20:22:47.782235 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:47.782246 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:47.782257 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:47.782269 | orchestrator | 2025-06-22 20:22:47.782282 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-22 20:22:47.782294 | orchestrator | Sunday 22 June 2025 20:22:46 +0000 (0:00:00.279) 0:00:02.907 *********** 2025-06-22 20:22:47.782306 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:47.782318 | orchestrator | 2025-06-22 20:22:47.782331 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:22:47.782343 | orchestrator | Sunday 22 June 2025 20:22:47 +0000 (0:00:00.444) 0:00:03.351 *********** 2025-06-22 20:22:47.782376 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:47.782389 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:47.782401 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:47.782413 | orchestrator | 2025-06-22 20:22:47.782443 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-22 20:22:47.782456 | orchestrator | Sunday 22 June 2025 20:22:47 +0000 (0:00:00.396) 0:00:03.748 *********** 2025-06-22 20:22:47.782470 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9e8b9602c1f87074dc69f8bfb3e02ebca0c83f4c8cc1eac06cf79462babe238', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:22:47.782484 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c8387690e022ea8893c5e1f237669c8e40b9486c631719ae05e0957bbf0fd07', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:47.782497 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ac8e18cd3fb21b1b5d150421f927f9239fa2b67c0df3ad76244c489f097ae013', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:47.782511 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8eb1cd55aae9dfd65f934c5dd6f92f2a776496dd524cad9d9124de27a91b6aea', 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 20:22:47.782536 | orchestrator | skipping: [testbed-node-3] => (item={'id': '94e52edd451673663b549c3dfdd26fc349563e9ee274aa055f7b53c623dab2af', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 20:22:47.782566 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fff5681ab16981b4d92f2b02a6b3e5ab92513eef316851fbed65deda14b458af', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:22:47.782589 | orchestrator | skipping: [testbed-node-3] => (item={'id': '61803b582fe486c65ad6aab6d8c98525ae3b66e8c3503a3ab8b58babbff76d9d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:22:47.782602 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ab356fd04cf2f03d9966b6c61b476c547854c5639da228d56614529b6b890f21', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-22 20:22:47.782615 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fe1285900684f4dc0a3c552c2a9dd3866227f2eba29bd1af3dcc1a90d66331c6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-22 20:22:47.782626 | orchestrator | skipping: [testbed-node-3] => (item={'id': '18e739a6e26a4178812c6f6e425643371c57c24075c89e9986247fe9b26da9bc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:22:47.782637 | orchestrator | skipping: [testbed-node-3] => (item={'id': '749cb7cf955cd8f76f150e4ae294e6e1a169c2802ab1c845db23ef770b1218db', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:22:47.782648 | orchestrator | skipping: [testbed-node-3] => (item={'id': '023f6511310fbdabcdc6a0965478a9a3310ec2e28bcaa10a07f44a09f580f5f7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:22:47.782669 | orchestrator | ok: [testbed-node-3] => (item={'id': '977f61fca6a48c7dec7354775465d8867ca51ce63b052bdb4b8cee9352a26a18', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:47.782681 | orchestrator | ok: [testbed-node-3] => (item={'id': '7e8fb2df4978a84af19bd680f31067177130fbbb4142114d250fe7c868e06886', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:47.782692 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40effe237ab23d38054d208e7c8f6818a6ba569bdec6165318d991838f4a38a2', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 
29 minutes'})  2025-06-22 20:22:47.782703 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ccdcb77276466a0054c199710bd82fe8842d9a47bf5501410dd0404d8e73c557', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:47.782713 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54286a69800abb5c3975ebfea3f02bcb039b179aff99a0780744de24a3ac016c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:47.782724 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1022ef1a7af5e95c3a8558dba8f14e681ceb5143b44d1ac90f32e498d9506745', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:47.782736 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ed771288d12880cbe480322c6ca6eb3f45602a95e3e1f72c5fcf91987edae92', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:47.782751 | orchestrator | skipping: [testbed-node-3] => (item={'id': '22432e62b20145e544a9ac7aece9d20f36ad1c89a1b1970146eac43b01b69b06', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 20:22:47.782769 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae77654c0370b18ecb8e2e953009253c33d6682fa0902b9a1926d7c3f1ed187e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:22:48.013274 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a869612a7211d07b6b93c0ff3b8a3039ba006764ec6f290d93df2c386a96e67', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:48.013386 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bbef98272316149352bbca31ee46bd625319b64d9f7237fcb643775b63a434fc', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:48.013405 | orchestrator | skipping: [testbed-node-4] => (item={'id': '27da690f3ba0a7e011e091293499483151aaa7d080c70a5661ee77dc5c6562f4', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:22:48.013417 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7a68865ae09d7e084cb5ca878fc802c43136ddbc4ca4aa49924b1bd28f6d5995', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 20:22:48.013459 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f7d48f430e6921165c8d3a91ce6a35cbafa16fca829ec7e30518c7984886ec1d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:22:48.013528 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'f4cd60acf362f43b0dc12cc42b98dcdf0c195a66cafd91b117b3a33c52c3cbc4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:22:48.013542 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a72b6da3cc0abf7f7319dd7deff57502204e510527dc8ac8f13473b8933b5461', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:22:48.013554 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2273dab5408640d366630cca40575a4326893b6131a96330d1c10e2cc6447477', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-22 20:22:48.013565 | orchestrator | skipping: [testbed-node-4] => (item={'id': '033f3351868111818295c6483a0cdaf7c1739e25aaab83d87d86abe0ef36df17', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:22:48.013577 | orchestrator | skipping: [testbed-node-4] => (item={'id': '822dd6603173c65574d8a242a658916de5a7611c5f9b31f64cb71a1ca20228f0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:22:48.013588 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f9c8458fc1b115d850f98440c8e10a1a67a9438681f1f879d9b08a719726a9c6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:22:48.013601 | orchestrator | ok: [testbed-node-4] => (item={'id': '913d0905063fa68fe508f1197f2e42286fe0c40ee1ea681f446aaa1c9b32992f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:48.013624 | orchestrator | ok: [testbed-node-4] => (item={'id': '28629b1f2dc7b6e857d43cd4d22fb2c7b304a8bb8b2873fefe99a318583aeeaa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:48.013636 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c71d1a3c19a8dbc340bff7b1b76bbe400c79b62f2c0bdbef6807945b9604f04f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-22 20:22:48.013665 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7c6ff951d84e618911d85907f1edb0378fbad1752f8371e4bac93ede05327d02', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:48.013677 | orchestrator | skipping: [testbed-node-4] => (item={'id': '14316a302ff2671c90a3d127505f89a000f7313dd8110daa8a837cd134f836db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:48.013689 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'caceca0b89d197330b4066d086485f334a5e8a9b615ac57e51e22c8f79709490', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 
'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:48.013700 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f82cef90b75c62940310f389ebfb4104a6fa1fbb0146a68d3b2f36d17997b0fb', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:48.013719 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9d0ede0b0e629911c41922c7bd9ccfac4766305c7077c3b46c00d19bb0a7583', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 20:22:48.013731 | orchestrator | skipping: [testbed-node-5] => (item={'id': '54c81ee76039098c0362c610bfc701311317545147ddb52bfdc8c9b5e39bc4ea', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:22:48.013742 | orchestrator | skipping: [testbed-node-5] => (item={'id': '38176c6b6a6012fac87a9d0a6a4e9df24c833754bc2564682e155f1c6ae09db8', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:48.013753 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa797f95ac883105f30cfaec5360db558b9c5be12708e8bb91a45b9a1afe7206', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:22:48.013764 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0ee7d5a1380c1662bc71a291d443ba8abb99111f1d5d7e32a534a80a7bbf8cf5', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:22:48.013775 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7fa9735e63607d7b8a1fe6b6ba8132182207cebe361727096208ff64dcff77cc', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 20:22:48.013786 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e25c6312026e2fa373c5d1316f7fa7a188422dc00bae469225d10e98a67d579f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:22:48.013797 | orchestrator | skipping: [testbed-node-5] => (item={'id': '24b3fbb5a3cb9d596c7c0e28c934ed303b6013b4349edfd6a85981312431eb5e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:22:48.013808 | orchestrator | skipping: [testbed-node-5] => (item={'id': '015fc58d03e37af879168ce67459e1156d755d85e280492155beabab872b3304', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:22:48.013819 | orchestrator | skipping: [testbed-node-5] => (item={'id': '557bb45c9cb8cba81249bfec1fa54823c1ea05d6a345867b6a0fcb105a4c5c88', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-22 
20:22:48.013833 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b6f831be1435259b3eb7bcf6cc3674dfd0e77af41edc23510432b45a3b870d4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:22:48.013853 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b2dd4ddd738533e6a71d663f990422917d4f092f7f8b38cc834ec4c8090d9d9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:22:55.651856 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03cfdaae54955bb2e5df4870284c10901cb76323b09916efb4bbe642170caee7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:22:55.652034 | orchestrator | ok: [testbed-node-5] => (item={'id': '3c7627208858f666465513791e3d289717132b0c1d1f058e1a8e589b81298829', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:55.652052 | orchestrator | ok: [testbed-node-5] => (item={'id': '05248732bafc2cd9c0e52b8e255d955b3b6869d1f2edf2132d790ac470368034', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:22:55.652065 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9064d39c572d097634c192b5a513bca433731cd765a47222808e92b7f01126e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-22 20:22:55.652078 | orchestrator | skipping: [testbed-node-5] => (item={'id': '32b91e744407836983d1b2f2f40abe2ea33049392071a549f6c437c4c1df754c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:55.652092 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b1b4ea72539aa8c1a50f62b554d5369513dd46e88c9330ac3febe4f0cb33639', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:22:55.652103 | orchestrator | skipping: [testbed-node-5] => (item={'id': '015d44fea542f5558166c83f888bde4f5e129b68231192963165268cfab30b7a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:55.652114 | orchestrator | skipping: [testbed-node-5] => (item={'id': '08c9584fefdd460e9738d5e7e8a54b649a7f819f6b494e448c8e73029996481e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:22:55.652125 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f721299bd0cd2c502ab964544ec2e08137439fb92867a3fefbef43549073d60', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 20:22:55.652137 | orchestrator | 2025-06-22 20:22:55.652151 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-22 20:22:55.652164 | orchestrator | Sunday 22 June 2025 20:22:47 +0000 
(0:00:00.468) 0:00:04.216 *********** 2025-06-22 20:22:55.652175 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.652187 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.652197 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.652208 | orchestrator | 2025-06-22 20:22:55.652219 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-22 20:22:55.652229 | orchestrator | Sunday 22 June 2025 20:22:48 +0000 (0:00:00.260) 0:00:04.477 *********** 2025-06-22 20:22:55.652240 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652252 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:22:55.652262 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:22:55.652273 | orchestrator | 2025-06-22 20:22:55.652283 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-22 20:22:55.652294 | orchestrator | Sunday 22 June 2025 20:22:48 +0000 (0:00:00.357) 0:00:04.834 *********** 2025-06-22 20:22:55.652305 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.652315 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.652326 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.652338 | orchestrator | 2025-06-22 20:22:55.652350 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:22:55.652363 | orchestrator | Sunday 22 June 2025 20:22:48 +0000 (0:00:00.261) 0:00:05.095 *********** 2025-06-22 20:22:55.652382 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.652394 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.652407 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.652453 | orchestrator | 2025-06-22 20:22:55.652466 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-22 20:22:55.652478 | orchestrator | Sunday 22 June 2025 20:22:49 +0000 (0:00:00.251) 0:00:05.347 *********** 2025-06-22 20:22:55.652491 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-22 20:22:55.652505 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-22 20:22:55.652517 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652529 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-22 20:22:55.652541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-22 20:22:55.652574 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:22:55.652588 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-22 20:22:55.652600 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-22 20:22:55.652612 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:22:55.652624 | orchestrator | 2025-06-22 20:22:55.652636 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-22 20:22:55.652648 | orchestrator | Sunday 22 June 2025 20:22:49 +0000 (0:00:00.260) 0:00:05.608 *********** 2025-06-22 20:22:55.652661 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.652674 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.652686 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.652697 | 
orchestrator | 2025-06-22 20:22:55.652707 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:22:55.652718 | orchestrator | Sunday 22 June 2025 20:22:49 +0000 (0:00:00.442) 0:00:06.050 *********** 2025-06-22 20:22:55.652728 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652739 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:22:55.652750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:22:55.652760 | orchestrator | 2025-06-22 20:22:55.652771 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:22:55.652781 | orchestrator | Sunday 22 June 2025 20:22:50 +0000 (0:00:00.282) 0:00:06.333 *********** 2025-06-22 20:22:55.652792 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652802 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:22:55.652813 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:22:55.652824 | orchestrator | 2025-06-22 20:22:55.652835 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-22 20:22:55.652845 | orchestrator | Sunday 22 June 2025 20:22:50 +0000 (0:00:00.290) 0:00:06.624 *********** 2025-06-22 20:22:55.652856 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.652867 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.652877 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.652888 | orchestrator | 2025-06-22 20:22:55.652898 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:22:55.652909 | orchestrator | Sunday 22 June 2025 20:22:50 +0000 (0:00:00.294) 0:00:06.918 *********** 2025-06-22 20:22:55.652920 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652930 | orchestrator | 2025-06-22 20:22:55.652941 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:22:55.652952 | orchestrator | Sunday 22 June 2025 20:22:51 +0000 (0:00:00.630) 0:00:07.549 *********** 2025-06-22 20:22:55.652963 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.652973 | orchestrator | 2025-06-22 20:22:55.652984 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:22:55.652994 | orchestrator | Sunday 22 June 2025 20:22:51 +0000 (0:00:00.250) 0:00:07.800 *********** 2025-06-22 20:22:55.653005 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.653016 | orchestrator | 2025-06-22 20:22:55.653026 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:55.653045 | orchestrator | Sunday 22 June 2025 20:22:51 +0000 (0:00:00.232) 0:00:08.032 *********** 2025-06-22 20:22:55.653056 | orchestrator | 2025-06-22 20:22:55.653067 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:55.653077 | orchestrator | Sunday 22 June 2025 20:22:51 +0000 (0:00:00.071) 0:00:08.103 *********** 2025-06-22 20:22:55.653088 | orchestrator | 2025-06-22 20:22:55.653098 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:22:55.653109 | orchestrator | Sunday 22 June 2025 20:22:51 +0000 (0:00:00.067) 0:00:08.171 *********** 2025-06-22 20:22:55.653119 | orchestrator | 2025-06-22 20:22:55.653130 | orchestrator | TASK [Print report file information] ******************************************* 
2025-06-22 20:22:55.653140 | orchestrator | Sunday 22 June 2025 20:22:52 +0000 (0:00:00.070) 0:00:08.241 *********** 2025-06-22 20:22:55.653151 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.653162 | orchestrator | 2025-06-22 20:22:55.653172 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-22 20:22:55.653183 | orchestrator | Sunday 22 June 2025 20:22:52 +0000 (0:00:00.229) 0:00:08.470 *********** 2025-06-22 20:22:55.653193 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:22:55.653204 | orchestrator | 2025-06-22 20:22:55.653215 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:22:55.653226 | orchestrator | Sunday 22 June 2025 20:22:52 +0000 (0:00:00.229) 0:00:08.700 *********** 2025-06-22 20:22:55.653236 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.653247 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:22:55.653257 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:22:55.653268 | orchestrator | 2025-06-22 20:22:55.653279 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-22 20:22:55.653289 | orchestrator | Sunday 22 June 2025 20:22:52 +0000 (0:00:00.268) 0:00:08.968 *********** 2025-06-22 20:22:55.653300 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.653310 | orchestrator | 2025-06-22 20:22:55.653321 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-22 20:22:55.653337 | orchestrator | Sunday 22 June 2025 20:22:53 +0000 (0:00:00.601) 0:00:09.570 *********** 2025-06-22 20:22:55.653348 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:22:55.653358 | orchestrator | 2025-06-22 20:22:55.653369 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-22 20:22:55.653380 | orchestrator | Sunday 22 June 2025 20:22:55 +0000 (0:00:01.750) 0:00:11.320 *********** 2025-06-22 20:22:55.653390 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.653401 | orchestrator | 2025-06-22 20:22:55.653412 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-22 20:22:55.653422 | orchestrator | Sunday 22 June 2025 20:22:55 +0000 (0:00:00.125) 0:00:11.445 *********** 2025-06-22 20:22:55.653460 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:22:55.653471 | orchestrator | 2025-06-22 20:22:55.653482 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-22 20:22:55.653493 | orchestrator | Sunday 22 June 2025 20:22:55 +0000 (0:00:00.303) 0:00:11.749 *********** 2025-06-22 20:22:55.653510 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.955889 | orchestrator | 2025-06-22 20:23:07.955970 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-22 20:23:07.955978 | orchestrator | Sunday 22 June 2025 20:22:55 +0000 (0:00:00.112) 0:00:11.862 *********** 2025-06-22 20:23:07.955982 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.955988 | orchestrator | 2025-06-22 20:23:07.955992 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:23:07.955996 | orchestrator | Sunday 22 June 2025 20:22:55 +0000 (0:00:00.126) 0:00:11.988 *********** 2025-06-22 20:23:07.956000 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956003 
| orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956007 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956027 | orchestrator | 2025-06-22 20:23:07.956031 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-22 20:23:07.956035 | orchestrator | Sunday 22 June 2025 20:22:56 +0000 (0:00:00.287) 0:00:12.275 *********** 2025-06-22 20:23:07.956039 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:23:07.956043 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:23:07.956047 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:23:07.956051 | orchestrator | 2025-06-22 20:23:07.956055 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-22 20:23:07.956059 | orchestrator | Sunday 22 June 2025 20:22:58 +0000 (0:00:02.774) 0:00:15.050 *********** 2025-06-22 20:23:07.956062 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956066 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956070 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956074 | orchestrator | 2025-06-22 20:23:07.956078 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-22 20:23:07.956081 | orchestrator | Sunday 22 June 2025 20:22:59 +0000 (0:00:00.299) 0:00:15.350 *********** 2025-06-22 20:23:07.956085 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956089 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956092 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956096 | orchestrator | 2025-06-22 20:23:07.956100 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-22 20:23:07.956103 | orchestrator | Sunday 22 June 2025 20:22:59 +0000 (0:00:00.488) 0:00:15.839 *********** 2025-06-22 20:23:07.956107 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.956111 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:07.956114 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:23:07.956118 | orchestrator | 2025-06-22 20:23:07.956122 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-22 20:23:07.956125 | orchestrator | Sunday 22 June 2025 20:22:59 +0000 (0:00:00.272) 0:00:16.111 *********** 2025-06-22 20:23:07.956129 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956133 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956136 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956140 | orchestrator | 2025-06-22 20:23:07.956144 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-22 20:23:07.956147 | orchestrator | Sunday 22 June 2025 20:23:00 +0000 (0:00:00.476) 0:00:16.587 *********** 2025-06-22 20:23:07.956151 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.956155 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:07.956158 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:23:07.956162 | orchestrator | 2025-06-22 20:23:07.956166 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-22 20:23:07.956169 | orchestrator | Sunday 22 June 2025 20:23:00 +0000 (0:00:00.307) 0:00:16.894 *********** 2025-06-22 20:23:07.956173 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.956177 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:07.956180 | orchestrator | skipping: [testbed-node-5] 2025-06-22 
20:23:07.956184 | orchestrator | 2025-06-22 20:23:07.956188 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:23:07.956192 | orchestrator | Sunday 22 June 2025 20:23:00 +0000 (0:00:00.282) 0:00:17.177 *********** 2025-06-22 20:23:07.956196 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956199 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956203 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956207 | orchestrator | 2025-06-22 20:23:07.956210 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-22 20:23:07.956214 | orchestrator | Sunday 22 June 2025 20:23:01 +0000 (0:00:00.477) 0:00:17.654 *********** 2025-06-22 20:23:07.956218 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956221 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956225 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956228 | orchestrator | 2025-06-22 20:23:07.956232 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-22 20:23:07.956240 | orchestrator | Sunday 22 June 2025 20:23:02 +0000 (0:00:00.699) 0:00:18.354 *********** 2025-06-22 20:23:07.956244 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956248 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956251 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956255 | orchestrator | 2025-06-22 20:23:07.956259 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-22 20:23:07.956263 | orchestrator | Sunday 22 June 2025 20:23:02 +0000 (0:00:00.300) 0:00:18.654 *********** 2025-06-22 20:23:07.956266 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.956270 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:07.956274 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:23:07.956278 | orchestrator | 2025-06-22 20:23:07.956281 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-22 20:23:07.956285 | orchestrator | Sunday 22 June 2025 20:23:02 +0000 (0:00:00.304) 0:00:18.959 *********** 2025-06-22 20:23:07.956289 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:07.956293 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:07.956296 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:07.956300 | orchestrator | 2025-06-22 20:23:07.956304 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:23:07.956308 | orchestrator | Sunday 22 June 2025 20:23:03 +0000 (0:00:00.292) 0:00:19.251 *********** 2025-06-22 20:23:07.956312 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:23:07.956316 | orchestrator | 2025-06-22 20:23:07.956319 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:23:07.956323 | orchestrator | Sunday 22 June 2025 20:23:03 +0000 (0:00:00.630) 0:00:19.881 *********** 2025-06-22 20:23:07.956327 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:07.956331 | orchestrator | 2025-06-22 20:23:07.956344 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:23:07.956348 | orchestrator | Sunday 22 June 2025 20:23:03 +0000 (0:00:00.232) 0:00:20.114 *********** 2025-06-22 20:23:07.956352 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 
20:23:07.956355 | orchestrator | 2025-06-22 20:23:07.956359 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:23:07.956363 | orchestrator | Sunday 22 June 2025 20:23:05 +0000 (0:00:01.579) 0:00:21.693 *********** 2025-06-22 20:23:07.956366 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:23:07.956370 | orchestrator | 2025-06-22 20:23:07.956374 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:23:07.956378 | orchestrator | Sunday 22 June 2025 20:23:05 +0000 (0:00:00.244) 0:00:21.938 *********** 2025-06-22 20:23:07.956381 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:23:07.956385 | orchestrator | 2025-06-22 20:23:07.956389 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:23:07.956392 | orchestrator | Sunday 22 June 2025 20:23:05 +0000 (0:00:00.240) 0:00:22.178 *********** 2025-06-22 20:23:07.956396 | orchestrator | 2025-06-22 20:23:07.956400 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:23:07.956403 | orchestrator | Sunday 22 June 2025 20:23:06 +0000 (0:00:00.077) 0:00:22.256 *********** 2025-06-22 20:23:07.956407 | orchestrator | 2025-06-22 20:23:07.956411 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:23:07.956414 | orchestrator | Sunday 22 June 2025 20:23:06 +0000 (0:00:00.070) 0:00:22.326 *********** 2025-06-22 20:23:07.956418 | orchestrator | 2025-06-22 20:23:07.956422 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:23:07.956426 | orchestrator | Sunday 22 June 2025 20:23:06 +0000 (0:00:00.069) 0:00:22.395 *********** 2025-06-22 20:23:07.956429 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:23:07.956480 | orchestrator | 2025-06-22 20:23:07.956485 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:23:07.956493 | orchestrator | Sunday 22 June 2025 20:23:07 +0000 (0:00:01.194) 0:00:23.589 *********** 2025-06-22 20:23:07.956498 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:23:07.956502 | orchestrator |  "msg": [ 2025-06-22 20:23:07.956507 | orchestrator |  "Validator run completed.", 2025-06-22 20:23:07.956512 | orchestrator |  "You can find the report file here:", 2025-06-22 20:23:07.956516 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-22T20:22:44+00:00-report.json", 2025-06-22 20:23:07.956522 | orchestrator |  "on the following host:", 2025-06-22 20:23:07.956526 | orchestrator |  "testbed-manager" 2025-06-22 20:23:07.956531 | orchestrator |  ] 2025-06-22 20:23:07.956535 | orchestrator | } 2025-06-22 20:23:07.956540 | orchestrator | 2025-06-22 20:23:07.956545 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:23:07.956550 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-22 20:23:07.956556 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:23:07.956560 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 
20:23:07.956564 | orchestrator | 2025-06-22 20:23:07.956568 | orchestrator | 2025-06-22 20:23:07.956607 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:23:07.956611 | orchestrator | Sunday 22 June 2025 20:23:07 +0000 (0:00:00.541) 0:00:24.131 *********** 2025-06-22 20:23:07.956616 | orchestrator | =============================================================================== 2025-06-22 20:23:07.956620 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.77s 2025-06-22 20:23:07.956624 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.75s 2025-06-22 20:23:07.956628 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-06-22 20:23:07.956633 | orchestrator | Write report file ------------------------------------------------------- 1.19s 2025-06-22 20:23:07.956637 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2025-06-22 20:23:07.956641 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.70s 2025-06-22 20:23:07.956646 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s 2025-06-22 20:23:07.956652 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.63s 2025-06-22 20:23:07.956656 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2025-06-22 20:23:07.956661 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-06-22 20:23:07.956665 | orchestrator | Print report file information ------------------------------------------- 0.54s 2025-06-22 20:23:07.956669 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-06-22 20:23:07.956673 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-06-22 20:23:07.956678 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.48s 2025-06-22 20:23:07.956682 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.47s 2025-06-22 20:23:07.956686 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.44s 2025-06-22 20:23:07.956694 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.44s 2025-06-22 20:23:08.221121 | orchestrator | Prepare test data ------------------------------------------------------- 0.40s 2025-06-22 20:23:08.221193 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.36s 2025-06-22 20:23:08.221198 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.31s 2025-06-22 20:23:08.493634 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-22 20:23:08.502605 | orchestrator | + set -e 2025-06-22 20:23:08.502619 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:23:08.502624 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:23:08.502629 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 20:23:08.502633 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:23:08.502637 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:23:08.502641 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:23:08.502646 | orchestrator 
| ++ CONFIGURATION_VERSION=main 2025-06-22 20:23:08.502698 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:23:08.502703 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:23:08.502707 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:23:08.502711 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:23:08.502715 | orchestrator | ++ export ARA=false 2025-06-22 20:23:08.502719 | orchestrator | ++ ARA=false 2025-06-22 20:23:08.502723 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:23:08.502727 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:23:08.502731 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:23:08.502735 | orchestrator | ++ TEMPEST=false 2025-06-22 20:23:08.502780 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:23:08.502786 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:23:08.502790 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 20:23:08.502794 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.98 2025-06-22 20:23:08.502879 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:23:08.502885 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:23:08.502889 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:23:08.502893 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:23:08.502941 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:08.502947 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:08.502951 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:23:08.502955 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:23:08.503133 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 20:23:08.503139 | orchestrator | + source /etc/os-release 2025-06-22 20:23:08.503181 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-22 20:23:08.503238 | orchestrator | ++ NAME=Ubuntu 2025-06-22 20:23:08.503244 | orchestrator | ++ VERSION_ID=24.04 2025-06-22 20:23:08.503332 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-22 20:23:08.503339 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-22 20:23:08.503343 | orchestrator | ++ ID=ubuntu 2025-06-22 20:23:08.503346 | orchestrator | ++ ID_LIKE=debian 2025-06-22 20:23:08.503350 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-22 20:23:08.503354 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-22 20:23:08.503465 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-22 20:23:08.503473 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-22 20:23:08.503478 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-22 20:23:08.503482 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-22 20:23:08.503486 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-22 20:23:08.503627 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-22 20:23:08.503691 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 20:23:08.536791 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 20:23:31.179530 | orchestrator | 2025-06-22 20:23:31.179643 | orchestrator | # Status of Elasticsearch 2025-06-22 20:23:31.179661 | orchestrator | 2025-06-22 20:23:31.179674 | orchestrator | + pushd /opt/configuration/contrib 2025-06-22 20:23:31.179687 | orchestrator | + echo 
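The plugin call that follows (check_elasticsearch against api-int.testbed.osism.xyz) reports standard Elasticsearch cluster-health fields (status, number_of_nodes, active_shards, and so on). As a minimal sketch of where that data comes from, the same fields can be read directly from the _cluster/health API; the port 9200, the -k flag for a self-signed testbed certificate, and the use of jq are assumptions for illustration and are not taken from this job:

# Read cluster health straight from the Elasticsearch API (illustrative sketch only).
ES_HOST=api-int.testbed.osism.xyz
curl -sk "https://${ES_HOST}:9200/_cluster/health?pretty" \
  | jq '{status, number_of_nodes, number_of_data_nodes, active_shards, unassigned_shards}'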
2025-06-22 20:23:31.179699 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-22 20:23:31.179710 | orchestrator | + echo 2025-06-22 20:23:31.179722 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-22 20:23:31.359556 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-22 20:23:31.359651 | orchestrator | 2025-06-22 20:23:31.359666 | orchestrator | # Status of MariaDB 2025-06-22 20:23:31.359678 | orchestrator | 2025-06-22 20:23:31.359689 | orchestrator | + echo 2025-06-22 20:23:31.359700 | orchestrator | + echo '# Status of MariaDB' 2025-06-22 20:23:31.359736 | orchestrator | + echo 2025-06-22 20:23:31.359747 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-22 20:23:31.359758 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-22 20:23:31.417841 | orchestrator | Reading package lists... 2025-06-22 20:23:31.726818 | orchestrator | Building dependency tree... 2025-06-22 20:23:31.727529 | orchestrator | Reading state information... 2025-06-22 20:23:32.064807 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-22 20:23:32.064909 | orchestrator | bc set to manually installed. 2025-06-22 20:23:32.064923 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-22 20:23:32.662343 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-22 20:23:32.662482 | orchestrator | 2025-06-22 20:23:32.662495 | orchestrator | # Status of Prometheus 2025-06-22 20:23:32.662504 | orchestrator | 2025-06-22 20:23:32.662511 | orchestrator | + echo 2025-06-22 20:23:32.662518 | orchestrator | + echo '# Status of Prometheus' 2025-06-22 20:23:32.662525 | orchestrator | + echo 2025-06-22 20:23:32.662533 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-22 20:23:32.711856 | orchestrator | Unauthorized 2025-06-22 20:23:32.715132 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-22 20:23:32.763286 | orchestrator | Unauthorized 2025-06-22 20:23:32.766689 | orchestrator | 2025-06-22 20:23:32.766744 | orchestrator | # Status of RabbitMQ 2025-06-22 20:23:32.766759 | orchestrator | 2025-06-22 20:23:32.766771 | orchestrator | + echo 2025-06-22 20:23:32.766783 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-22 20:23:32.766794 | orchestrator | + echo 2025-06-22 20:23:32.766825 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-22 20:23:33.202760 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-22 20:23:33.211080 | orchestrator | 2025-06-22 20:23:33.211134 | orchestrator | # Status of Redis 2025-06-22 20:23:33.211155 | orchestrator | + echo 2025-06-22 20:23:33.211171 | orchestrator | + echo '# Status of Redis' 2025-06-22 20:23:33.211184 | orchestrator | 2025-06-22 20:23:33.211195 | orchestrator | + echo 2025-06-22 20:23:33.211209 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO 
replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-22 20:23:33.216061 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002154s;;;0.000000;10.000000 2025-06-22 20:23:33.216349 | orchestrator | + popd 2025-06-22 20:23:33.216642 | orchestrator | 2025-06-22 20:23:33.216663 | orchestrator | + echo 2025-06-22 20:23:33.216674 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-22 20:23:33.216687 | orchestrator | # Create backup of MariaDB database 2025-06-22 20:23:33.216698 | orchestrator | 2025-06-22 20:23:33.216709 | orchestrator | + echo 2025-06-22 20:23:33.216720 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-22 20:23:34.959926 | orchestrator | 2025-06-22 20:23:34 | INFO  | Task 43b2df2a-7fa6-4387-bc47-d121abb01f78 (mariadb_backup) was prepared for execution. 2025-06-22 20:23:34.960012 | orchestrator | 2025-06-22 20:23:34 | INFO  | It takes a moment until task 43b2df2a-7fa6-4387-bc47-d121abb01f78 (mariadb_backup) has been started and output is visible here. 2025-06-22 20:23:38.832282 | orchestrator | 2025-06-22 20:23:38.833374 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:23:38.834618 | orchestrator | 2025-06-22 20:23:38.836815 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:23:38.838104 | orchestrator | Sunday 22 June 2025 20:23:38 +0000 (0:00:00.175) 0:00:00.175 *********** 2025-06-22 20:23:39.033036 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:23:39.161753 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:23:39.162329 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:23:39.167875 | orchestrator | 2025-06-22 20:23:39.167928 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:23:39.169066 | orchestrator | Sunday 22 June 2025 20:23:39 +0000 (0:00:00.332) 0:00:00.508 *********** 2025-06-22 20:23:39.708426 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 20:23:39.711724 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 20:23:39.711823 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 20:23:39.712352 | orchestrator | 2025-06-22 20:23:39.712584 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 20:23:39.713307 | orchestrator | 2025-06-22 20:23:39.713333 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 20:23:39.713730 | orchestrator | Sunday 22 June 2025 20:23:39 +0000 (0:00:00.546) 0:00:01.054 *********** 2025-06-22 20:23:40.110242 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:23:40.113819 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:23:40.113852 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:23:40.113864 | orchestrator | 2025-06-22 20:23:40.113876 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:23:40.114631 | orchestrator | Sunday 22 June 2025 20:23:40 +0000 (0:00:00.401) 0:00:01.456 *********** 2025-06-22 20:23:40.637645 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:23:40.637747 | orchestrator | 2025-06-22 20:23:40.637763 
| orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-22 20:23:40.637786 | orchestrator | Sunday 22 June 2025 20:23:40 +0000 (0:00:00.525) 0:00:01.981 *********** 2025-06-22 20:23:43.682218 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:23:43.684506 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:23:43.686007 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:23:43.687087 | orchestrator | 2025-06-22 20:23:43.687930 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-22 20:23:43.689871 | orchestrator | Sunday 22 June 2025 20:23:43 +0000 (0:00:03.042) 0:00:05.023 *********** 2025-06-22 20:24:42.682935 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 20:24:42.683054 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-22 20:24:42.683468 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 20:24:42.684102 | orchestrator | mariadb_bootstrap_restart 2025-06-22 20:24:42.753438 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:42.754274 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:42.755777 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:42.759630 | orchestrator | 2025-06-22 20:24:42.759694 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 20:24:42.759709 | orchestrator | skipping: no hosts matched 2025-06-22 20:24:42.759721 | orchestrator | 2025-06-22 20:24:42.759732 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:24:42.760120 | orchestrator | skipping: no hosts matched 2025-06-22 20:24:42.761116 | orchestrator | 2025-06-22 20:24:42.761815 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 20:24:42.762281 | orchestrator | skipping: no hosts matched 2025-06-22 20:24:42.763255 | orchestrator | 2025-06-22 20:24:42.763602 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 20:24:42.764465 | orchestrator | 2025-06-22 20:24:42.764841 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 20:24:42.765700 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:59.076) 0:01:04.100 *********** 2025-06-22 20:24:42.947822 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:43.069440 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:43.071328 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:43.074234 | orchestrator | 2025-06-22 20:24:43.076541 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 20:24:43.077919 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.315) 0:01:04.415 *********** 2025-06-22 20:24:43.420998 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:43.468309 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:43.469123 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:43.470115 | orchestrator | 2025-06-22 20:24:43.471645 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:43.472026 | orchestrator | 2025-06-22 20:24:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
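The backup that just completed was triggered with "osism apply mariadb_backup -e mariadb_backup_type=full"; the play includes /ansible/roles/mariadb/tasks/backup.yml and takes the full backup via Mariabackup on the first shard member (testbed-node-0 here, roughly 59 seconds). A short sketch of the two invocation variants, assuming the deployed kolla-ansible mariadb role also honours the incremental type; backup location and retention are not shown in this log:

# Full backup, exactly as executed by this job
osism apply mariadb_backup -e mariadb_backup_type=full
# Incremental backup on top of the last full one (assumed to be supported by
# the deployed kolla-ansible release; verify before relying on it)
osism apply mariadb_backup -e mariadb_backup_type=incremental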
2025-06-22 20:24:43.472619 | orchestrator | 2025-06-22 20:24:43 | INFO  | Please wait and do not abort execution. 2025-06-22 20:24:43.473993 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:43.474895 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:24:43.476250 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:24:43.476951 | orchestrator | 2025-06-22 20:24:43.477908 | orchestrator | 2025-06-22 20:24:43.478911 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:43.479833 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.398) 0:01:04.814 *********** 2025-06-22 20:24:43.480471 | orchestrator | =============================================================================== 2025-06-22 20:24:43.480880 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 59.08s 2025-06-22 20:24:43.481458 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.04s 2025-06-22 20:24:43.481998 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-06-22 20:24:43.482706 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2025-06-22 20:24:43.483209 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2025-06-22 20:24:43.483908 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2025-06-22 20:24:43.484272 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-22 20:24:43.484812 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2025-06-22 20:24:43.997764 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-22 20:24:44.011106 | orchestrator | + set -e 2025-06-22 20:24:44.011207 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:24:44.011224 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:24:44.011238 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:24:44.011249 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:24:44.011260 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:24:44.011272 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:24:44.014452 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:24:44.022304 | orchestrator | 2025-06-22 20:24:44.022356 | orchestrator | # OpenStack endpoints 2025-06-22 20:24:44.022402 | orchestrator | 2025-06-22 20:24:44.022421 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:24:44.022434 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:24:44.022448 | orchestrator | + export OS_CLOUD=admin 2025-06-22 20:24:44.022467 | orchestrator | + OS_CLOUD=admin 2025-06-22 20:24:44.022514 | orchestrator | + echo 2025-06-22 20:24:44.022534 | orchestrator | + echo '# OpenStack endpoints' 2025-06-22 20:24:44.022553 | orchestrator | + echo 2025-06-22 20:24:44.022571 | orchestrator | + openstack endpoint list 2025-06-22 20:24:47.350876 | orchestrator | 
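The preamble of 300-openstack.sh traced above pins the manager version and selects the admin cloud before any API checks run; a condensed sketch of that setup:

    # Read manager_version (9.1.0 in this run) from the manager environment configuration
    MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml)
    export MANAGER_VERSION

    # All subsequent openstack CLI calls use the admin entry from clouds.yaml
    export OS_CLOUD=admin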
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:24:47.350978 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-22 20:24:47.350993 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:24:47.351005 | orchestrator | | 00bbe925dc294f848c2a017cf61aec26 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-22 20:24:47.351042 | orchestrator | | 026696263f4f48ca9580c7ce61333d87 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-22 20:24:47.351053 | orchestrator | | 18bbc7be7ce54e9b96fb22bd841b3543 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:24:47.351064 | orchestrator | | 27da098a09704ad38040f1895631af96 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-22 20:24:47.351075 | orchestrator | | 2d42a713c0954f49be012d2fdd7830c6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-22 20:24:47.351085 | orchestrator | | 38e9b2a2a91c4d83934d1aebd481f04f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-22 20:24:47.351096 | orchestrator | | 3ddcaf0162b248a6aca8ec3d03cf197c | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:24:47.351107 | orchestrator | | 499452778fc24af1b4720a8378546f3e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-22 20:24:47.351118 | orchestrator | | 5499a25dd3064142bb2d2505cf4c4493 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-22 20:24:47.351129 | orchestrator | | 6332e4730884429cbabbc95d957fbf3a | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:24:47.351139 | orchestrator | | 64dd494e53384c86b5a29b443049b4be | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:24:47.351150 | orchestrator | | 78141cd6fbad48298d68468737e90bde | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-22 20:24:47.351161 | orchestrator | | 7b63ee69e4644b5cae151036e9279ce8 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-22 20:24:47.351171 | orchestrator | | 8f2661a4f77e4c0782196a2f3d991e89 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:24:47.351182 | orchestrator | | a2f7a183c4ae47599a62e6678163ed27 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-22 20:24:47.351194 | orchestrator | | b78317c1fe144caca5d0697c139aa2f7 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-22 20:24:47.351205 | orchestrator | | c1ed6dd1fe6c4d16beadb1d40b05e692 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-22 20:24:47.351215 | orchestrator | | d23916947308490c8f666c7935520f26 
| RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-22 20:24:47.351226 | orchestrator | | d69b98d0f3374160b29e68947efdfbc1 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-22 20:24:47.351237 | orchestrator | | dc30304386144cd7bae053310e501041 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-22 20:24:47.351264 | orchestrator | | ea66fb5ab5864820bd0d3bfcfe3582a8 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-22 20:24:47.351284 | orchestrator | | f1946120831d4cc591c1ef53cf094a48 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:24:47.351294 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:24:47.606810 | orchestrator | 2025-06-22 20:24:47.606907 | orchestrator | # Cinder 2025-06-22 20:24:47.606921 | orchestrator | 2025-06-22 20:24:47.606933 | orchestrator | + echo 2025-06-22 20:24:47.606944 | orchestrator | + echo '# Cinder' 2025-06-22 20:24:47.606956 | orchestrator | + echo 2025-06-22 20:24:47.606968 | orchestrator | + openstack volume service list 2025-06-22 20:24:50.257885 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:50.257981 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:24:50.257995 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:50.258008 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T20:24:43.000000 | 2025-06-22 20:24:50.258077 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:24:43.000000 | 2025-06-22 20:24:50.258104 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:24:42.000000 | 2025-06-22 20:24:50.258116 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-22T20:24:42.000000 | 2025-06-22 20:24:50.258127 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-22T20:24:44.000000 | 2025-06-22 20:24:50.258138 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-22T20:24:44.000000 | 2025-06-22 20:24:50.258149 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-22T20:24:47.000000 | 2025-06-22 20:24:50.258160 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-22T20:24:47.000000 | 2025-06-22 20:24:50.258175 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-22T20:24:47.000000 | 2025-06-22 20:24:50.258186 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:50.493412 | orchestrator | 2025-06-22 20:24:50.493552 | orchestrator | # Neutron 2025-06-22 20:24:50.493568 | orchestrator | 2025-06-22 20:24:50.493580 | orchestrator | + echo 2025-06-22 20:24:50.493591 | orchestrator | + echo '# Neutron' 2025-06-22 20:24:50.493604 | orchestrator | + echo 2025-06-22 20:24:50.493615 | orchestrator | 
+ openstack network agent list 2025-06-22 20:24:53.455700 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:24:53.455791 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-22 20:24:53.455806 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:24:53.455818 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455829 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455840 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455851 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455886 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455898 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-22 20:24:53.455909 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:24:53.455919 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:24:53.455930 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:24:53.455941 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:24:53.631040 | orchestrator | + openstack network service provider list 2025-06-22 20:24:56.073576 | orchestrator | +---------------+------+---------+ 2025-06-22 20:24:56.073692 | orchestrator | | Service Type | Name | Default | 2025-06-22 20:24:56.073707 | orchestrator | +---------------+------+---------+ 2025-06-22 20:24:56.073720 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-22 20:24:56.073731 | orchestrator | +---------------+------+---------+ 2025-06-22 20:24:56.364303 | orchestrator | 2025-06-22 20:24:56.364399 | orchestrator | # Nova 2025-06-22 20:24:56.364414 | orchestrator | 2025-06-22 20:24:56.364425 | orchestrator | + echo 2025-06-22 20:24:56.364437 | orchestrator | + echo '# Nova' 2025-06-22 20:24:56.364449 | orchestrator | + echo 2025-06-22 20:24:56.364460 | orchestrator | + openstack compute service list 2025-06-22 20:24:59.457740 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:59.457863 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:24:59.457878 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:59.457890 | orchestrator | | cb100b55-3bfa-4696-81a3-b9720d8c7758 | nova-scheduler | testbed-node-0 | internal | enabled | up | 
2025-06-22T20:24:54.000000 | 2025-06-22 20:24:59.457902 | orchestrator | | 66051570-82e8-4e9c-855e-3e30912e6056 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:24:58.000000 | 2025-06-22 20:24:59.457913 | orchestrator | | 4823c9f7-6a52-43b7-8013-b1b08b6d955e | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:24:50.000000 | 2025-06-22 20:24:59.457924 | orchestrator | | 4ea50d04-9378-4b74-b497-1c300f100b7d | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-22T20:24:51.000000 | 2025-06-22 20:24:59.457935 | orchestrator | | 77932aea-3a7c-4191-b363-852dc149994a | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-22T20:24:52.000000 | 2025-06-22 20:24:59.457946 | orchestrator | | e342ffbb-910f-4a90-a620-54820a257e31 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-22T20:24:53.000000 | 2025-06-22 20:24:59.457957 | orchestrator | | 7ccbed57-c392-49ce-bac8-7c7cdf85982e | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-22T20:24:52.000000 | 2025-06-22 20:24:59.457967 | orchestrator | | 2f0e0a2e-a688-41d7-8a09-689910401d30 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-22T20:24:53.000000 | 2025-06-22 20:24:59.457999 | orchestrator | | ad718071-5f09-4f92-bb63-1b05a4035cf1 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-22T20:24:53.000000 | 2025-06-22 20:24:59.458010 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:24:59.725454 | orchestrator | + openstack hypervisor list 2025-06-22 20:25:04.713434 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:25:04.713617 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-22 20:25:04.713635 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:25:04.713647 | orchestrator | | 088de825-5a60-4f52-94f2-4cad017b117b | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-22 20:25:04.713658 | orchestrator | | 019d24fa-2466-4f5d-95b3-bfbd57079dba | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-22 20:25:04.713669 | orchestrator | | b5b2b652-9cfc-4dd9-baed-4487ca5e568b | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-22 20:25:04.713684 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:25:04.970114 | orchestrator | 2025-06-22 20:25:04.970226 | orchestrator | # Run OpenStack test play 2025-06-22 20:25:04.970248 | orchestrator | 2025-06-22 20:25:04.970267 | orchestrator | + echo 2025-06-22 20:25:04.970286 | orchestrator | + echo '# Run OpenStack test play' 2025-06-22 20:25:04.970305 | orchestrator | + echo 2025-06-22 20:25:04.970323 | orchestrator | + osism apply --environment openstack test 2025-06-22 20:25:06.665677 | orchestrator | 2025-06-22 20:25:06 | INFO  | Trying to run play test in environment openstack 2025-06-22 20:25:06.670683 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:25:06.670754 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:25:06.670774 | orchestrator | Registering Redlock._release_script 2025-06-22 20:25:06.731779 | orchestrator | 2025-06-22 20:25:06 | INFO  | Task 10eb18bb-4519-44b7-9ce5-75dafcb40d65 (test) was prepared for execution. 
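Everything the check script has verified up to this point amounts to a short list of read-only CLI calls against the admin cloud; a condensed sketch of the same sequence, assuming OS_CLOUD=admin as exported above:

    openstack endpoint list                   # Keystone service catalog
    openstack volume service list             # Cinder scheduler/volume/backup state
    openstack network agent list              # OVN controller and metadata agents
    openstack network service provider list   # L3 service provider (ovn)
    openstack compute service list            # Nova scheduler/conductor/compute state
    openstack hypervisor list                 # Registered QEMU hypervisors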
2025-06-22 20:25:06.731875 | orchestrator | 2025-06-22 20:25:06 | INFO  | It takes a moment until task 10eb18bb-4519-44b7-9ce5-75dafcb40d65 (test) has been started and output is visible here. 2025-06-22 20:25:10.484283 | orchestrator | 2025-06-22 20:25:10.484593 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-22 20:25:10.484630 | orchestrator | 2025-06-22 20:25:10.484656 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-22 20:25:10.485302 | orchestrator | Sunday 22 June 2025 20:25:10 +0000 (0:00:00.060) 0:00:00.060 *********** 2025-06-22 20:25:13.779368 | orchestrator | changed: [localhost] 2025-06-22 20:25:13.779947 | orchestrator | 2025-06-22 20:25:13.780963 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-22 20:25:13.782718 | orchestrator | Sunday 22 June 2025 20:25:13 +0000 (0:00:03.295) 0:00:03.355 *********** 2025-06-22 20:25:17.982234 | orchestrator | changed: [localhost] 2025-06-22 20:25:17.982358 | orchestrator | 2025-06-22 20:25:17.982920 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-22 20:25:17.983609 | orchestrator | Sunday 22 June 2025 20:25:17 +0000 (0:00:04.203) 0:00:07.559 *********** 2025-06-22 20:25:23.952621 | orchestrator | changed: [localhost] 2025-06-22 20:25:23.952992 | orchestrator | 2025-06-22 20:25:23.953781 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-22 20:25:23.955665 | orchestrator | Sunday 22 June 2025 20:25:23 +0000 (0:00:05.968) 0:00:13.528 *********** 2025-06-22 20:25:27.863817 | orchestrator | changed: [localhost] 2025-06-22 20:25:27.866156 | orchestrator | 2025-06-22 20:25:27.866518 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-22 20:25:27.866811 | orchestrator | Sunday 22 June 2025 20:25:27 +0000 (0:00:03.911) 0:00:17.439 *********** 2025-06-22 20:25:31.976614 | orchestrator | changed: [localhost] 2025-06-22 20:25:31.977607 | orchestrator | 2025-06-22 20:25:31.978274 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-22 20:25:31.979676 | orchestrator | Sunday 22 June 2025 20:25:31 +0000 (0:00:04.110) 0:00:21.550 *********** 2025-06-22 20:25:43.073454 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-22 20:25:43.073575 | orchestrator | changed: [localhost] => (item=member) 2025-06-22 20:25:43.073592 | orchestrator | changed: [localhost] => (item=creator) 2025-06-22 20:25:43.073605 | orchestrator | 2025-06-22 20:25:43.073618 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-22 20:25:43.074489 | orchestrator | Sunday 22 June 2025 20:25:43 +0000 (0:00:11.097) 0:00:32.647 *********** 2025-06-22 20:25:47.426831 | orchestrator | changed: [localhost] 2025-06-22 20:25:47.427062 | orchestrator | 2025-06-22 20:25:47.428195 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-22 20:25:47.429202 | orchestrator | Sunday 22 June 2025 20:25:47 +0000 (0:00:04.354) 0:00:37.002 *********** 2025-06-22 20:25:52.297267 | orchestrator | changed: [localhost] 2025-06-22 20:25:52.297894 | orchestrator | 2025-06-22 20:25:52.300373 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-22 
20:25:52.300402 | orchestrator | Sunday 22 June 2025 20:25:52 +0000 (0:00:04.871) 0:00:41.874 *********** 2025-06-22 20:25:56.617762 | orchestrator | changed: [localhost] 2025-06-22 20:25:56.619252 | orchestrator | 2025-06-22 20:25:56.620775 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-22 20:25:56.621551 | orchestrator | Sunday 22 June 2025 20:25:56 +0000 (0:00:04.320) 0:00:46.194 *********** 2025-06-22 20:26:00.556127 | orchestrator | changed: [localhost] 2025-06-22 20:26:00.556816 | orchestrator | 2025-06-22 20:26:00.558482 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-06-22 20:26:00.558952 | orchestrator | Sunday 22 June 2025 20:26:00 +0000 (0:00:03.938) 0:00:50.132 *********** 2025-06-22 20:26:04.451177 | orchestrator | changed: [localhost] 2025-06-22 20:26:04.451310 | orchestrator | 2025-06-22 20:26:04.451781 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-22 20:26:04.451811 | orchestrator | Sunday 22 June 2025 20:26:04 +0000 (0:00:03.892) 0:00:54.025 *********** 2025-06-22 20:26:08.711227 | orchestrator | changed: [localhost] 2025-06-22 20:26:08.711707 | orchestrator | 2025-06-22 20:26:08.713707 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-22 20:26:08.715145 | orchestrator | Sunday 22 June 2025 20:26:08 +0000 (0:00:04.262) 0:00:58.288 *********** 2025-06-22 20:26:22.430407 | orchestrator | changed: [localhost] 2025-06-22 20:26:22.430632 | orchestrator | 2025-06-22 20:26:22.430669 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-22 20:26:22.430691 | orchestrator | Sunday 22 June 2025 20:26:22 +0000 (0:00:13.716) 0:01:12.004 *********** 2025-06-22 20:28:40.384607 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:28:40.384843 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:28:40.385177 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:28:40.385204 | orchestrator | 2025-06-22 20:28:40.385218 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:29:10.383846 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:29:10.383973 | orchestrator | 2025-06-22 20:29:10.383990 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:29:40.388046 | orchestrator | 2025-06-22 20:29:40.388162 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:29:41.667037 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:29:41.668074 | orchestrator | 2025-06-22 20:29:41.668892 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-22 20:29:41.670158 | orchestrator | Sunday 22 June 2025 20:29:41 +0000 (0:03:19.239) 0:04:31.244 *********** 2025-06-22 20:30:04.536640 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:30:04.536759 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:30:04.537878 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:30:04.539262 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:30:04.539554 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:30:04.540061 | orchestrator | 2025-06-22 20:30:04.540534 | 
orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-22 20:30:04.541022 | orchestrator | Sunday 22 June 2025 20:30:04 +0000 (0:00:22.867) 0:04:54.111 *********** 2025-06-22 20:30:36.050345 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:30:36.050532 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:30:36.050559 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:30:36.051499 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:30:36.053917 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:30:36.054771 | orchestrator | 2025-06-22 20:30:36.054995 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-22 20:30:36.055476 | orchestrator | Sunday 22 June 2025 20:30:36 +0000 (0:00:31.507) 0:05:25.619 *********** 2025-06-22 20:30:42.946407 | orchestrator | changed: [localhost] 2025-06-22 20:30:42.946525 | orchestrator | 2025-06-22 20:30:42.946542 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-06-22 20:30:42.946555 | orchestrator | Sunday 22 June 2025 20:30:42 +0000 (0:00:06.902) 0:05:32.521 *********** 2025-06-22 20:30:56.818297 | orchestrator | changed: [localhost] 2025-06-22 20:30:56.818432 | orchestrator | 2025-06-22 20:30:56.818451 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-22 20:30:56.818465 | orchestrator | Sunday 22 June 2025 20:30:56 +0000 (0:00:13.868) 0:05:46.390 *********** 2025-06-22 20:31:01.643966 | orchestrator | ok: [localhost] 2025-06-22 20:31:01.644078 | orchestrator | 2025-06-22 20:31:01.644096 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-22 20:31:01.644354 | orchestrator | Sunday 22 June 2025 20:31:01 +0000 (0:00:04.827) 0:05:51.218 *********** 2025-06-22 20:31:01.689283 | orchestrator | ok: [localhost] => { 2025-06-22 20:31:01.692851 | orchestrator |  "msg": "192.168.112.144" 2025-06-22 20:31:01.692961 | orchestrator | } 2025-06-22 20:31:01.693755 | orchestrator | 2025-06-22 20:31:01.694585 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:31:01.695367 | orchestrator | 2025-06-22 20:31:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:31:01.695535 | orchestrator | 2025-06-22 20:31:01 | INFO  | Please wait and do not abort execution. 
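Once the test play has finished, the script lists the created servers and pings every active floating IP; a cleaned-up sketch of the two helpers, reconstructed from the server_list and server_ping shell traces further below:

    server_list() {
        openstack --os-cloud test server list
    }

    server_ping() {
        # Ping each ACTIVE floating IP three times; tr strips stray carriage returns
        for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
            ping -c3 "${address}"
        done
    }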
2025-06-22 20:31:01.696627 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:31:01.696971 | orchestrator | 2025-06-22 20:31:01.697960 | orchestrator | 2025-06-22 20:31:01.699363 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:31:01.699949 | orchestrator | Sunday 22 June 2025 20:31:01 +0000 (0:00:00.046) 0:05:51.265 *********** 2025-06-22 20:31:01.700822 | orchestrator | =============================================================================== 2025-06-22 20:31:01.700862 | orchestrator | Create test instances ------------------------------------------------- 199.24s 2025-06-22 20:31:01.701202 | orchestrator | Add tag to instances --------------------------------------------------- 31.51s 2025-06-22 20:31:01.701576 | orchestrator | Add metadata to instances ---------------------------------------------- 22.87s 2025-06-22 20:31:01.702204 | orchestrator | Attach test volume ----------------------------------------------------- 13.87s 2025-06-22 20:31:01.702631 | orchestrator | Create test network topology ------------------------------------------- 13.72s 2025-06-22 20:31:01.703376 | orchestrator | Add member roles to user test ------------------------------------------ 11.10s 2025-06-22 20:31:01.704446 | orchestrator | Create test volume ------------------------------------------------------ 6.90s 2025-06-22 20:31:01.704477 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.97s 2025-06-22 20:31:01.704801 | orchestrator | Create ssh security group ----------------------------------------------- 4.87s 2025-06-22 20:31:01.705109 | orchestrator | Create floating ip address ---------------------------------------------- 4.83s 2025-06-22 20:31:01.705730 | orchestrator | Create test server group ------------------------------------------------ 4.35s 2025-06-22 20:31:01.706389 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.32s 2025-06-22 20:31:01.706656 | orchestrator | Create test keypair ----------------------------------------------------- 4.26s 2025-06-22 20:31:01.707436 | orchestrator | Create test-admin user -------------------------------------------------- 4.20s 2025-06-22 20:31:01.707736 | orchestrator | Create test user -------------------------------------------------------- 4.11s 2025-06-22 20:31:01.708189 | orchestrator | Create icmp security group ---------------------------------------------- 3.94s 2025-06-22 20:31:01.708715 | orchestrator | Create test project ----------------------------------------------------- 3.91s 2025-06-22 20:31:01.709087 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.89s 2025-06-22 20:31:01.709395 | orchestrator | Create test domain ------------------------------------------------------ 3.30s 2025-06-22 20:31:01.709754 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-06-22 20:31:02.220935 | orchestrator | + server_list 2025-06-22 20:31:02.221040 | orchestrator | + openstack --os-cloud test server list 2025-06-22 20:31:05.916497 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:31:05.916658 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-22 20:31:05.916674 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:31:05.916686 | orchestrator | | f51a8ad9-c132-42ad-bfd8-9e663b9ca94d | test-4 | ACTIVE | auto_allocated_network=10.42.0.9, 192.168.112.129 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:31:05.916698 | orchestrator | | b1ed4a78-37da-4b54-bdcc-fb2924f787d1 | test-3 | ACTIVE | auto_allocated_network=10.42.0.13, 192.168.112.112 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:31:05.916709 | orchestrator | | f83a701c-53ff-4506-9425-238424c400a9 | test-2 | ACTIVE | auto_allocated_network=10.42.0.20, 192.168.112.165 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:31:05.916720 | orchestrator | | 543bd764-7d68-47a3-9ed1-1d7956176028 | test-1 | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.168 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:31:05.916731 | orchestrator | | 523f4327-5bac-453d-80b2-9fecf9b05339 | test | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.144 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:31:05.916742 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:31:06.168013 | orchestrator | + openstack --os-cloud test server show test 2025-06-22 20:31:09.600104 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:09.600216 | orchestrator | | Field | Value | 2025-06-22 20:31:09.600232 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:09.600243 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:31:09.600255 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:31:09.600285 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:31:09.600297 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-06-22 20:31:09.600309 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:31:09.600320 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:31:09.600331 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:31:09.600351 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:31:09.600380 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:31:09.600392 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:31:09.600403 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:31:09.600414 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:31:09.600432 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:31:09.600443 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:31:09.600459 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:31:09.600470 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:26:51.000000 | 2025-06-22 20:31:09.600482 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-06-22 20:31:09.600493 | orchestrator | | accessIPv4 | | 2025-06-22 20:31:09.600503 | orchestrator | | accessIPv6 | | 2025-06-22 20:31:09.600515 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.144 | 2025-06-22 20:31:09.600532 | orchestrator | | config_drive | | 2025-06-22 20:31:09.600571 | orchestrator | | created | 2025-06-22T20:26:30Z | 2025-06-22 20:31:09.600583 | orchestrator | | description | None | 2025-06-22 20:31:09.600600 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:31:09.600611 | orchestrator | | hostId | a79d9b0511c6ab9b0d0e430dba007971657af0d1fc1a3a8f0bc85110 | 2025-06-22 20:31:09.600623 | orchestrator | | host_status | None | 2025-06-22 20:31:09.600638 | orchestrator | | id | 523f4327-5bac-453d-80b2-9fecf9b05339 | 2025-06-22 20:31:09.600650 | orchestrator | | image | Cirros 0.6.2 (fb338899-033a-46fe-a447-efdf6d08a885) | 2025-06-22 20:31:09.600661 | orchestrator | | key_name | test | 2025-06-22 20:31:09.600672 | orchestrator | | locked | False | 2025-06-22 20:31:09.600683 | orchestrator | | locked_reason | None | 2025-06-22 20:31:09.600694 | orchestrator | | name | test | 2025-06-22 20:31:09.600711 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:31:09.600723 | orchestrator | | progress | 0 | 2025-06-22 20:31:09.600744 | orchestrator | | project_id | 0cb3a802bef349fa994da306ed4f0045 | 2025-06-22 20:31:09.600755 | orchestrator | | properties | hostname='test' | 2025-06-22 20:31:09.600766 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:31:09.600781 | orchestrator | | | name='ssh' | 2025-06-22 20:31:09.600792 | orchestrator | | server_groups | None | 2025-06-22 20:31:09.600804 | orchestrator | | status | ACTIVE | 2025-06-22 20:31:09.600815 | orchestrator | | tags | test | 2025-06-22 20:31:09.600826 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:31:09.600838 | orchestrator | | updated | 2025-06-22T20:29:46Z | 2025-06-22 20:31:09.600854 | orchestrator | | user_id | ecf3bd849ab748268084661ce13727bd | 2025-06-22 20:31:09.600865 | orchestrator | | volumes_attached | delete_on_termination='False', id='8e412023-150b-4008-a39a-084423bb38df' | 2025-06-22 20:31:09.600883 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:09.863388 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-22 20:31:13.015303 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:13.015415 | orchestrator | | Field | Value | 2025-06-22 20:31:13.015431 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:13.015461 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:31:13.015474 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:31:13.015486 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:31:13.015498 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-22 20:31:13.015510 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:31:13.015523 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:31:13.015588 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:31:13.015601 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:31:13.015631 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:31:13.015643 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:31:13.015655 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:31:13.015666 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:31:13.015677 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:31:13.015688 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:31:13.015699 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:31:13.015710 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:27:37.000000 | 2025-06-22 20:31:13.015721 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:31:13.015739 | orchestrator | | accessIPv4 | | 2025-06-22 20:31:13.015759 | orchestrator | | accessIPv6 | | 2025-06-22 20:31:13.015770 | orchestrator | | addresses | auto_allocated_network=10.42.0.3, 192.168.112.168 | 2025-06-22 20:31:13.015788 | orchestrator | | config_drive | | 2025-06-22 20:31:13.015800 | orchestrator | | created | 2025-06-22T20:27:14Z | 2025-06-22 20:31:13.015811 | orchestrator | | description | None | 2025-06-22 20:31:13.015827 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:31:13.015839 | orchestrator | | hostId | 680f033c96be86a1b3e5ab144de8d19a3f7c198656305b756b57fe89 | 2025-06-22 20:31:13.015853 | orchestrator | | host_status | None | 2025-06-22 20:31:13.015866 | orchestrator | | id | 543bd764-7d68-47a3-9ed1-1d7956176028 | 2025-06-22 20:31:13.015882 | orchestrator | | image | Cirros 0.6.2 (fb338899-033a-46fe-a447-efdf6d08a885) | 2025-06-22 20:31:13.015912 | orchestrator | | key_name | test | 2025-06-22 20:31:13.015932 | orchestrator | | locked | False | 2025-06-22 20:31:13.016097 | orchestrator | | locked_reason | None | 2025-06-22 20:31:13.016124 | orchestrator | | name | test-1 | 2025-06-22 20:31:13.016158 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:31:13.016180 | orchestrator | | progress | 0 | 2025-06-22 20:31:13.016201 | orchestrator | | project_id | 0cb3a802bef349fa994da306ed4f0045 | 2025-06-22 20:31:13.016230 | orchestrator | | properties | hostname='test-1' | 2025-06-22 20:31:13.016243 | 
orchestrator | | security_groups | name='icmp' | 2025-06-22 20:31:13.016254 | orchestrator | | | name='ssh' | 2025-06-22 20:31:13.016265 | orchestrator | | server_groups | None | 2025-06-22 20:31:13.016286 | orchestrator | | status | ACTIVE | 2025-06-22 20:31:13.016298 | orchestrator | | tags | test | 2025-06-22 20:31:13.016309 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:31:13.016320 | orchestrator | | updated | 2025-06-22T20:29:50Z | 2025-06-22 20:31:13.016337 | orchestrator | | user_id | ecf3bd849ab748268084661ce13727bd | 2025-06-22 20:31:13.016349 | orchestrator | | volumes_attached | | 2025-06-22 20:31:13.016360 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:13.254350 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-22 20:31:16.286374 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:16.286455 | orchestrator | | Field | Value | 2025-06-22 20:31:16.286463 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:16.286486 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:31:16.286492 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:31:16.286497 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:31:16.286503 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-22 20:31:16.286508 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:31:16.286513 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:31:16.286518 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:31:16.286524 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:31:16.286555 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:31:16.286565 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:31:16.286571 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:31:16.286581 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:31:16.286586 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:31:16.286592 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:31:16.286597 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:31:16.286618 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:28:20.000000 | 2025-06-22 20:31:16.286624 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:31:16.286629 | orchestrator | | accessIPv4 | | 2025-06-22 20:31:16.286634 | orchestrator | | accessIPv6 | | 2025-06-22 20:31:16.286639 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.20, 192.168.112.165 | 2025-06-22 20:31:16.286648 | orchestrator | | config_drive | | 2025-06-22 20:31:16.286657 | orchestrator | | created | 2025-06-22T20:27:59Z | 2025-06-22 20:31:16.286672 | orchestrator | | description | None | 2025-06-22 20:31:16.286677 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:31:16.286683 | orchestrator | | hostId | 17736c924b1b6985c7fb2deb75e04fa7f10a788433a0ea9595a9f3d4 | 2025-06-22 20:31:16.286688 | orchestrator | | host_status | None | 2025-06-22 20:31:16.286708 | orchestrator | | id | f83a701c-53ff-4506-9425-238424c400a9 | 2025-06-22 20:31:16.286714 | orchestrator | | image | Cirros 0.6.2 (fb338899-033a-46fe-a447-efdf6d08a885) | 2025-06-22 20:31:16.286720 | orchestrator | | key_name | test | 2025-06-22 20:31:16.286725 | orchestrator | | locked | False | 2025-06-22 20:31:16.286730 | orchestrator | | locked_reason | None | 2025-06-22 20:31:16.286735 | orchestrator | | name | test-2 | 2025-06-22 20:31:16.286745 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:31:16.286754 | orchestrator | | progress | 0 | 2025-06-22 20:31:16.286760 | orchestrator | | project_id | 0cb3a802bef349fa994da306ed4f0045 | 2025-06-22 20:31:16.286765 | orchestrator | | properties | hostname='test-2' | 2025-06-22 20:31:16.286775 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:31:16.286780 | orchestrator | | | name='ssh' | 2025-06-22 20:31:16.286786 | orchestrator | | server_groups | None | 2025-06-22 20:31:16.286791 | orchestrator | | status | ACTIVE | 2025-06-22 20:31:16.286796 | orchestrator | | tags | test | 2025-06-22 20:31:16.286801 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:31:16.286806 | orchestrator | | updated | 2025-06-22T20:29:55Z | 2025-06-22 20:31:16.286818 | orchestrator | | user_id | ecf3bd849ab748268084661ce13727bd | 2025-06-22 20:31:16.286826 | orchestrator | | volumes_attached | | 2025-06-22 20:31:16.291619 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:16.550635 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-22 20:31:19.655928 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:19.656032 | orchestrator | | Field | Value | 2025-06-22 20:31:19.656047 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:19.656060 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:31:19.656072 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:31:19.656083 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:31:19.656095 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-22 20:31:19.656106 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:31:19.656144 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:31:19.656156 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:31:19.656182 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:31:19.656211 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:31:19.656224 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:31:19.656236 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:31:19.656248 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:31:19.656260 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:31:19.656272 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:31:19.656284 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:31:19.656296 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:28:53.000000 | 2025-06-22 20:31:19.656317 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:31:19.656329 | orchestrator | | accessIPv4 | | 2025-06-22 20:31:19.656346 | orchestrator | | accessIPv6 | | 2025-06-22 20:31:19.656358 | orchestrator | | addresses | auto_allocated_network=10.42.0.13, 192.168.112.112 | 2025-06-22 20:31:19.656375 | orchestrator | | config_drive | | 2025-06-22 20:31:19.656388 | orchestrator | | created | 2025-06-22T20:28:37Z | 2025-06-22 20:31:19.656400 | orchestrator | | description | None | 2025-06-22 20:31:19.656412 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:31:19.656424 | orchestrator | | hostId | a79d9b0511c6ab9b0d0e430dba007971657af0d1fc1a3a8f0bc85110 | 2025-06-22 20:31:19.656436 | orchestrator | | host_status | None | 2025-06-22 20:31:19.656448 | orchestrator | | id | b1ed4a78-37da-4b54-bdcc-fb2924f787d1 | 2025-06-22 20:31:19.656467 | orchestrator | | image | Cirros 0.6.2 (fb338899-033a-46fe-a447-efdf6d08a885) | 2025-06-22 20:31:19.656479 | orchestrator | | key_name | test | 2025-06-22 20:31:19.656491 | orchestrator | | locked | False | 2025-06-22 20:31:19.656513 | orchestrator | | locked_reason | None | 2025-06-22 20:31:19.656603 | orchestrator | | name | test-3 | 2025-06-22 20:31:19.656634 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:31:19.656652 | orchestrator | | progress | 0 | 2025-06-22 20:31:19.656669 | orchestrator | | project_id | 0cb3a802bef349fa994da306ed4f0045 | 2025-06-22 20:31:19.656687 | orchestrator | | properties | hostname='test-3' | 2025-06-22 20:31:19.656705 | 
orchestrator | | security_groups | name='icmp' | 2025-06-22 20:31:19.656736 | orchestrator | | | name='ssh' | 2025-06-22 20:31:19.656757 | orchestrator | | server_groups | None | 2025-06-22 20:31:19.656768 | orchestrator | | status | ACTIVE | 2025-06-22 20:31:19.656779 | orchestrator | | tags | test | 2025-06-22 20:31:19.656790 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:31:19.656808 | orchestrator | | updated | 2025-06-22T20:29:59Z | 2025-06-22 20:31:19.656825 | orchestrator | | user_id | ecf3bd849ab748268084661ce13727bd | 2025-06-22 20:31:19.656837 | orchestrator | | volumes_attached | | 2025-06-22 20:31:19.661017 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:19.916621 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-22 20:31:23.151389 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:23.151515 | orchestrator | | Field | Value | 2025-06-22 20:31:23.151617 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:23.151637 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:31:23.151654 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:31:23.151674 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:31:23.151692 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-22 20:31:23.151709 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:31:23.151727 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:31:23.151746 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:31:23.151766 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:31:23.151810 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:31:23.151830 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:31:23.151863 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:31:23.151883 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:31:23.151902 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:31:23.151921 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:31:23.151963 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:31:23.151986 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:29:26.000000 | 2025-06-22 20:31:23.152014 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:31:23.152036 | orchestrator | | accessIPv4 | | 2025-06-22 20:31:23.152057 | orchestrator | | accessIPv6 | | 2025-06-22 20:31:23.152079 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.9, 192.168.112.129 | 2025-06-22 20:31:23.152122 | orchestrator | | config_drive | | 2025-06-22 20:31:23.152145 | orchestrator | | created | 2025-06-22T20:29:09Z | 2025-06-22 20:31:23.152166 | orchestrator | | description | None | 2025-06-22 20:31:23.152187 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:31:23.152210 | orchestrator | | hostId | 680f033c96be86a1b3e5ab144de8d19a3f7c198656305b756b57fe89 | 2025-06-22 20:31:23.152232 | orchestrator | | host_status | None | 2025-06-22 20:31:23.152254 | orchestrator | | id | f51a8ad9-c132-42ad-bfd8-9e663b9ca94d | 2025-06-22 20:31:23.152279 | orchestrator | | image | Cirros 0.6.2 (fb338899-033a-46fe-a447-efdf6d08a885) | 2025-06-22 20:31:23.152298 | orchestrator | | key_name | test | 2025-06-22 20:31:23.152318 | orchestrator | | locked | False | 2025-06-22 20:31:23.152338 | orchestrator | | locked_reason | None | 2025-06-22 20:31:23.152367 | orchestrator | | name | test-4 | 2025-06-22 20:31:23.152397 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:31:23.152416 | orchestrator | | progress | 0 | 2025-06-22 20:31:23.152436 | orchestrator | | project_id | 0cb3a802bef349fa994da306ed4f0045 | 2025-06-22 20:31:23.152455 | orchestrator | | properties | hostname='test-4' | 2025-06-22 20:31:23.152474 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:31:23.152494 | orchestrator | | | name='ssh' | 2025-06-22 20:31:23.152513 | orchestrator | | server_groups | None | 2025-06-22 20:31:23.152566 | orchestrator | | status | ACTIVE | 2025-06-22 20:31:23.152586 | orchestrator | | tags | test | 2025-06-22 20:31:23.152605 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:31:23.152636 | orchestrator | | updated | 2025-06-22T20:30:04Z | 2025-06-22 20:31:23.152665 | orchestrator | | user_id | ecf3bd849ab748268084661ce13727bd | 2025-06-22 20:31:23.152685 | orchestrator | | volumes_attached | | 2025-06-22 20:31:23.152705 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:31:23.410883 | orchestrator | + server_ping 2025-06-22 20:31:23.412470 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:31:23.412505 | orchestrator | ++ tr -d '\r' 2025-06-22 20:31:26.331239 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:31:26.331350 | orchestrator | + ping -c3 192.168.112.129 2025-06-22 20:31:26.348378 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data. 
2025-06-22 20:31:26.348423 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=11.9 ms 2025-06-22 20:31:27.339613 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=1.85 ms 2025-06-22 20:31:28.340593 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=2.01 ms 2025-06-22 20:31:28.340698 | orchestrator | 2025-06-22 20:31:28.340713 | orchestrator | --- 192.168.112.129 ping statistics --- 2025-06-22 20:31:28.340726 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:31:28.340737 | orchestrator | rtt min/avg/max/mdev = 1.854/5.248/11.879/4.689 ms 2025-06-22 20:31:28.341639 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:31:28.341665 | orchestrator | + ping -c3 192.168.112.168 2025-06-22 20:31:28.353579 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2025-06-22 20:31:28.353647 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=7.05 ms 2025-06-22 20:31:29.351081 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=1.60 ms 2025-06-22 20:31:30.351184 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.57 ms 2025-06-22 20:31:30.351311 | orchestrator | 2025-06-22 20:31:30.351333 | orchestrator | --- 192.168.112.168 ping statistics --- 2025-06-22 20:31:30.351351 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:31:30.351366 | orchestrator | rtt min/avg/max/mdev = 1.573/3.405/7.049/2.576 ms 2025-06-22 20:31:30.352019 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:31:30.352063 | orchestrator | + ping -c3 192.168.112.165 2025-06-22 20:31:30.361385 | orchestrator | PING 192.168.112.165 (192.168.112.165) 56(84) bytes of data. 2025-06-22 20:31:30.361455 | orchestrator | 64 bytes from 192.168.112.165: icmp_seq=1 ttl=63 time=5.01 ms 2025-06-22 20:31:31.360748 | orchestrator | 64 bytes from 192.168.112.165: icmp_seq=2 ttl=63 time=2.09 ms 2025-06-22 20:31:32.361710 | orchestrator | 64 bytes from 192.168.112.165: icmp_seq=3 ttl=63 time=1.50 ms 2025-06-22 20:31:32.361841 | orchestrator | 2025-06-22 20:31:32.361858 | orchestrator | --- 192.168.112.165 ping statistics --- 2025-06-22 20:31:32.361871 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:31:32.361883 | orchestrator | rtt min/avg/max/mdev = 1.504/2.866/5.010/1.534 ms 2025-06-22 20:31:32.361911 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:31:32.361923 | orchestrator | + ping -c3 192.168.112.144 2025-06-22 20:31:32.373468 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 
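A natural companion to the server show tables and the ping loop above, purely illustrative and not part of the job itself, would be a check that every test server reports status ACTIVE before pinging. The server names and the --name filter below are assumptions; the formatter options match the ones used by the job.

    # Illustrative only: warn about any "test-*" server that is not ACTIVE.
    # -f value -c Name -c Status prints "name status" pairs, one per line.
    openstack --os-cloud test server list --name 'test-' -f value -c Name -c Status |
    while read -r name status; do
        [ "$status" = "ACTIVE" ] || echo "WARNING: $name is $status" >&2
    done
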
2025-06-22 20:31:32.373563 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=7.24 ms 2025-06-22 20:31:33.371071 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.53 ms 2025-06-22 20:31:34.373674 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=2.40 ms 2025-06-22 20:31:34.373780 | orchestrator | 2025-06-22 20:31:34.373797 | orchestrator | --- 192.168.112.144 ping statistics --- 2025-06-22 20:31:34.373810 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:31:34.373821 | orchestrator | rtt min/avg/max/mdev = 2.398/4.052/7.235/2.250 ms 2025-06-22 20:31:34.373833 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:31:34.373844 | orchestrator | + ping -c3 192.168.112.112 2025-06-22 20:31:34.385623 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2025-06-22 20:31:34.385694 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.37 ms 2025-06-22 20:31:35.381159 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.98 ms 2025-06-22 20:31:36.383030 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.21 ms 2025-06-22 20:31:36.383134 | orchestrator | 2025-06-22 20:31:36.383150 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-06-22 20:31:36.383163 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:31:36.383174 | orchestrator | rtt min/avg/max/mdev = 1.979/3.850/7.366/2.487 ms 2025-06-22 20:31:36.383511 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-22 20:31:36.479300 | orchestrator | ok: Runtime: 0:10:20.606383 2025-06-22 20:31:36.526496 | 2025-06-22 20:31:36.526776 | TASK [Run tempest] 2025-06-22 20:31:37.072367 | orchestrator | skipping: Conditional result was False 2025-06-22 20:31:37.091328 | 2025-06-22 20:31:37.091518 | TASK [Check prometheus alert status] 2025-06-22 20:31:37.630442 | orchestrator | skipping: Conditional result was False 2025-06-22 20:31:37.632607 | 2025-06-22 20:31:37.632735 | PLAY RECAP 2025-06-22 20:31:37.632827 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-22 20:31:37.632861 | 2025-06-22 20:31:37.852257 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-22 20:31:37.855559 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:31:38.565933 | 2025-06-22 20:31:38.566056 | PLAY [Post output play] 2025-06-22 20:31:38.581240 | 2025-06-22 20:31:38.581377 | LOOP [stage-output : Register sources] 2025-06-22 20:31:38.636644 | 2025-06-22 20:31:38.636891 | TASK [stage-output : Check sudo] 2025-06-22 20:31:39.659120 | orchestrator | sudo: a password is required 2025-06-22 20:31:39.726686 | orchestrator | ok: Runtime: 0:00:00.217212 2025-06-22 20:31:39.739840 | 2025-06-22 20:31:39.739980 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-22 20:31:39.777964 | 2025-06-22 20:31:39.778283 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-22 20:31:39.851551 | orchestrator | ok 2025-06-22 20:31:39.858285 | 2025-06-22 20:31:39.858387 | LOOP [stage-output : Ensure target folders exist] 2025-06-22 20:31:40.350287 | orchestrator | ok: "docs" 2025-06-22 20:31:40.350604 | 2025-06-22 20:31:40.582221 | orchestrator | ok: "artifacts" 2025-06-22 
20:31:40.827505 | orchestrator | ok: "logs" 2025-06-22 20:31:40.843073 | 2025-06-22 20:31:40.843230 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-22 20:31:40.875938 | 2025-06-22 20:31:40.876203 | TASK [stage-output : Make all log files readable] 2025-06-22 20:31:41.186740 | orchestrator | ok 2025-06-22 20:31:41.195599 | 2025-06-22 20:31:41.195710 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-22 20:31:41.229444 | orchestrator | skipping: Conditional result was False 2025-06-22 20:31:41.243647 | 2025-06-22 20:31:41.243773 | TASK [stage-output : Discover log files for compression] 2025-06-22 20:31:41.267118 | orchestrator | skipping: Conditional result was False 2025-06-22 20:31:41.278723 | 2025-06-22 20:31:41.278889 | LOOP [stage-output : Archive everything from logs] 2025-06-22 20:31:41.316574 | 2025-06-22 20:31:41.316697 | PLAY [Post cleanup play] 2025-06-22 20:31:41.324379 | 2025-06-22 20:31:41.324464 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:31:41.380763 | orchestrator | ok 2025-06-22 20:31:41.393972 | 2025-06-22 20:31:41.394089 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:31:41.418686 | orchestrator | skipping: Conditional result was False 2025-06-22 20:31:41.431471 | 2025-06-22 20:31:41.431596 | TASK [Clean the cloud environment] 2025-06-22 20:31:42.732951 | orchestrator | 2025-06-22 20:31:42 - clean up servers 2025-06-22 20:31:43.482543 | orchestrator | 2025-06-22 20:31:43 - testbed-manager 2025-06-22 20:31:43.570729 | orchestrator | 2025-06-22 20:31:43 - testbed-node-4 2025-06-22 20:31:43.655812 | orchestrator | 2025-06-22 20:31:43 - testbed-node-3 2025-06-22 20:31:43.736220 | orchestrator | 2025-06-22 20:31:43 - testbed-node-0 2025-06-22 20:31:43.819488 | orchestrator | 2025-06-22 20:31:43 - testbed-node-1 2025-06-22 20:31:43.907102 | orchestrator | 2025-06-22 20:31:43 - testbed-node-2 2025-06-22 20:31:43.997618 | orchestrator | 2025-06-22 20:31:43 - testbed-node-5 2025-06-22 20:31:44.093073 | orchestrator | 2025-06-22 20:31:44 - clean up keypairs 2025-06-22 20:31:44.116462 | orchestrator | 2025-06-22 20:31:44 - testbed 2025-06-22 20:31:44.137481 | orchestrator | 2025-06-22 20:31:44 - wait for servers to be gone 2025-06-22 20:31:54.990995 | orchestrator | 2025-06-22 20:31:54 - clean up ports 2025-06-22 20:31:55.189882 | orchestrator | 2025-06-22 20:31:55 - 1ce45f4d-32c8-41f9-8cd7-033bad9a9a63 2025-06-22 20:31:55.463607 | orchestrator | 2025-06-22 20:31:55 - 32d20620-4cfd-4125-9ec9-35edeae39de9 2025-06-22 20:31:55.738569 | orchestrator | 2025-06-22 20:31:55 - 9876bcf0-9b40-4a98-bfc2-bc51aba945a5 2025-06-22 20:31:55.958533 | orchestrator | 2025-06-22 20:31:55 - a15c8862-17e6-4694-a47c-09e84b4f2c4d 2025-06-22 20:31:56.204660 | orchestrator | 2025-06-22 20:31:56 - aaa2d0ba-02b7-4cb8-ba65-cb5b041c13de 2025-06-22 20:31:56.609027 | orchestrator | 2025-06-22 20:31:56 - cc5def26-5d2a-462c-becb-e1a39a79bb17 2025-06-22 20:31:56.920395 | orchestrator | 2025-06-22 20:31:56 - e0a5ee73-9400-4c99-9daa-4ad1f5633898 2025-06-22 20:31:57.148940 | orchestrator | 2025-06-22 20:31:57 - clean up volumes 2025-06-22 20:31:57.264884 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-0-node-base 2025-06-22 20:31:57.309889 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-5-node-base 2025-06-22 20:31:57.351051 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-3-node-base 2025-06-22 20:31:57.396036 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-4-node-base 2025-06-22 20:31:57.444342 | orchestrator | 2025-06-22 
20:31:57 - testbed-volume-2-node-base 2025-06-22 20:31:57.491982 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-1-node-base 2025-06-22 20:31:57.539298 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-manager-base 2025-06-22 20:31:57.580725 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-0-node-3 2025-06-22 20:31:57.629893 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-5-node-5 2025-06-22 20:31:57.677646 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-3-node-3 2025-06-22 20:31:57.723259 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-1-node-4 2025-06-22 20:31:57.773216 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-2-node-5 2025-06-22 20:31:57.817902 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-8-node-5 2025-06-22 20:31:57.873134 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-7-node-4 2025-06-22 20:31:57.920134 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-4-node-4 2025-06-22 20:31:57.969865 | orchestrator | 2025-06-22 20:31:57 - testbed-volume-6-node-3 2025-06-22 20:31:58.017606 | orchestrator | 2025-06-22 20:31:58 - disconnect routers 2025-06-22 20:31:58.172074 | orchestrator | 2025-06-22 20:31:58 - testbed 2025-06-22 20:31:59.475584 | orchestrator | 2025-06-22 20:31:59 - clean up subnets 2025-06-22 20:31:59.519964 | orchestrator | 2025-06-22 20:31:59 - subnet-testbed-management 2025-06-22 20:31:59.685539 | orchestrator | 2025-06-22 20:31:59 - clean up networks 2025-06-22 20:31:59.846142 | orchestrator | 2025-06-22 20:31:59 - net-testbed-management 2025-06-22 20:32:00.150364 | orchestrator | 2025-06-22 20:32:00 - clean up security groups 2025-06-22 20:32:00.199788 | orchestrator | 2025-06-22 20:32:00 - testbed-management 2025-06-22 20:32:00.339271 | orchestrator | 2025-06-22 20:32:00 - testbed-node 2025-06-22 20:32:00.474409 | orchestrator | 2025-06-22 20:32:00 - clean up floating ips 2025-06-22 20:32:00.510784 | orchestrator | 2025-06-22 20:32:00 - 81.163.193.98 2025-06-22 20:32:00.896828 | orchestrator | 2025-06-22 20:32:00 - clean up routers 2025-06-22 20:32:01.017458 | orchestrator | 2025-06-22 20:32:01 - testbed 2025-06-22 20:32:01.992087 | orchestrator | ok: Runtime: 0:00:20.102151 2025-06-22 20:32:01.996771 | 2025-06-22 20:32:01.996949 | PLAY RECAP 2025-06-22 20:32:01.997111 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-06-22 20:32:01.997207 | 2025-06-22 20:32:02.145840 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:32:02.147211 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:32:02.925446 | 2025-06-22 20:32:02.925611 | PLAY [Cleanup play] 2025-06-22 20:32:02.941581 | 2025-06-22 20:32:02.941724 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:32:02.998182 | orchestrator | ok 2025-06-22 20:32:03.009312 | 2025-06-22 20:32:03.009495 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:32:03.044560 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:03.064286 | 2025-06-22 20:32:03.064526 | TASK [Clean the cloud environment] 2025-06-22 20:32:04.225130 | orchestrator | 2025-06-22 20:32:04 - clean up servers 2025-06-22 20:32:04.686471 | orchestrator | 2025-06-22 20:32:04 - clean up keypairs 2025-06-22 20:32:04.704958 | orchestrator | 2025-06-22 20:32:04 - wait for servers to be gone 2025-06-22 20:32:04.747994 | orchestrator | 2025-06-22 20:32:04 - clean up ports 2025-06-22 20:32:04.828842 | orchestrator | 2025-06-22 20:32:04 - clean 
up volumes 2025-06-22 20:32:04.897656 | orchestrator | 2025-06-22 20:32:04 - disconnect routers 2025-06-22 20:32:04.927168 | orchestrator | 2025-06-22 20:32:04 - clean up subnets 2025-06-22 20:32:04.956070 | orchestrator | 2025-06-22 20:32:04 - clean up networks 2025-06-22 20:32:05.120747 | orchestrator | 2025-06-22 20:32:05 - clean up security groups 2025-06-22 20:32:05.161486 | orchestrator | 2025-06-22 20:32:05 - clean up floating ips 2025-06-22 20:32:05.190326 | orchestrator | 2025-06-22 20:32:05 - clean up routers 2025-06-22 20:32:05.603810 | orchestrator | ok: Runtime: 0:00:01.374528 2025-06-22 20:32:05.606187 | 2025-06-22 20:32:05.606289 | PLAY RECAP 2025-06-22 20:32:05.606387 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-06-22 20:32:05.606424 | 2025-06-22 20:32:05.755809 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:32:05.756833 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-22 20:32:06.544974 | 2025-06-22 20:32:06.545156 | PLAY [Base post-fetch] 2025-06-22 20:32:06.561653 | 2025-06-22 20:32:06.561813 | TASK [fetch-output : Set log path for multiple nodes] 2025-06-22 20:32:06.617107 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:06.630941 | 2025-06-22 20:32:06.631179 | TASK [fetch-output : Set log path for single node] 2025-06-22 20:32:06.690328 | orchestrator | ok 2025-06-22 20:32:06.699755 | 2025-06-22 20:32:06.699913 | LOOP [fetch-output : Ensure local output dirs] 2025-06-22 20:32:07.199845 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/logs" 2025-06-22 20:32:07.517686 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/artifacts" 2025-06-22 20:32:07.816099 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a65c4017b01b42e2a4146bccaa6b7607/work/docs" 2025-06-22 20:32:07.840960 | 2025-06-22 20:32:07.841216 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-22 20:32:08.798470 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:32:08.798800 | orchestrator | changed: All items complete 2025-06-22 20:32:08.798877 | 2025-06-22 20:32:09.528100 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:32:10.296543 | orchestrator | changed: .d..t...... 
./ 2025-06-22 20:32:10.324472 | 2025-06-22 20:32:10.324627 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-22 20:32:10.359402 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:10.362119 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:10.387106 | 2025-06-22 20:32:10.387245 | PLAY RECAP 2025-06-22 20:32:10.387367 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-22 20:32:10.387416 | 2025-06-22 20:32:10.527754 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-22 20:32:10.530166 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:32:11.297796 | 2025-06-22 20:32:11.297970 | PLAY [Base post] 2025-06-22 20:32:11.313513 | 2025-06-22 20:32:11.313661 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-22 20:32:12.879863 | orchestrator | changed 2025-06-22 20:32:12.890982 | 2025-06-22 20:32:12.891130 | PLAY RECAP 2025-06-22 20:32:12.891208 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-22 20:32:12.891284 | 2025-06-22 20:32:13.018096 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:32:13.021061 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-22 20:32:13.817960 | 2025-06-22 20:32:13.818132 | PLAY [Base post-logs] 2025-06-22 20:32:13.828833 | 2025-06-22 20:32:13.828975 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-22 20:32:14.285573 | localhost | changed 2025-06-22 20:32:14.304224 | 2025-06-22 20:32:14.304441 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-22 20:32:14.345460 | localhost | ok 2025-06-22 20:32:14.350980 | 2025-06-22 20:32:14.351129 | TASK [Set zuul-log-path fact] 2025-06-22 20:32:14.379095 | localhost | ok 2025-06-22 20:32:14.392222 | 2025-06-22 20:32:14.392401 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-22 20:32:14.430530 | localhost | ok 2025-06-22 20:32:14.436921 | 2025-06-22 20:32:14.437076 | TASK [upload-logs : Create log directories] 2025-06-22 20:32:14.947269 | localhost | changed 2025-06-22 20:32:14.952585 | 2025-06-22 20:32:14.952746 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-22 20:32:15.469943 | localhost -> localhost | ok: Runtime: 0:00:00.009691 2025-06-22 20:32:15.479277 | 2025-06-22 20:32:15.479512 | TASK [upload-logs : Upload logs to log server] 2025-06-22 20:32:16.079300 | localhost | Output suppressed because no_log was given 2025-06-22 20:32:16.082257 | 2025-06-22 20:32:16.082441 | LOOP [upload-logs : Compress console log and json output] 2025-06-22 20:32:16.138012 | localhost | skipping: Conditional result was False 2025-06-22 20:32:16.142389 | localhost | skipping: Conditional result was False 2025-06-22 20:32:16.155910 | 2025-06-22 20:32:16.156235 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-22 20:32:16.205161 | localhost | skipping: Conditional result was False 2025-06-22 20:32:16.205659 | 2025-06-22 20:32:16.209608 | localhost | skipping: Conditional result was False 2025-06-22 20:32:16.214001 | 2025-06-22 20:32:16.214123 | LOOP [upload-logs : Upload console log and json output]
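The two "Clean the cloud environment" tasks above tear resources down in dependency order: servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the routers themselves. As a rough sketch only (the job uses its own cleanup tooling, not this script), the equivalent manual sequence with the plain openstack CLI would look roughly like this; the resource names are taken from the cleanup output above, and the cloud name is copied from the smoke-test commands, so it may differ for the actual cleanup credentials.

    # Teardown order as seen in the cleanup tasks above; error handling omitted.
    cloud=test
    for id in $(openstack --os-cloud "$cloud" server list -f value -c ID); do
        openstack --os-cloud "$cloud" server delete --wait "$id"
    done
    openstack --os-cloud "$cloud" keypair delete testbed
    # ... delete leftover ports and volumes, then detach the subnet from the router:
    openstack --os-cloud "$cloud" router remove subnet testbed subnet-testbed-management
    openstack --os-cloud "$cloud" subnet delete subnet-testbed-management
    openstack --os-cloud "$cloud" network delete net-testbed-management
    openstack --os-cloud "$cloud" security group delete testbed-management testbed-node
    # ... release floating IPs, then remove the router itself:
    openstack --os-cloud "$cloud" router delete testbed
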